Test Report: Docker_Linux_crio 17102

38d5550e53f52b04c4b197c514428c4ecd9b2e1a:2023-08-21:30667

Test fail (7/304)

TestAddons/parallel/Ingress (152.3s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-351207 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-351207 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-351207 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [22a3a6fa-2940-438f-ad25-464d54d32d34] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [22a3a6fa-2940-438f-ad25-464d54d32d34] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.00787366s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p addons-351207 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-351207 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.756393186s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:262: (dbg) Run:  kubectl --context addons-351207 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p addons-351207 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p addons-351207 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p addons-351207 addons disable ingress-dns --alsologtostderr -v=1: (1.409100467s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p addons-351207 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p addons-351207 addons disable ingress --alsologtostderr -v=1: (7.574501274s)
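
Note on the failure above: "ssh: Process exited with status 28" is the remote command's exit status propagated back through ssh, and 28 is curl's exit code for an operation timeout, so the request was issued inside the node but the ingress never answered within curl's window. A minimal sketch for replaying the probe by hand against this profile (the profile name and Host header are taken from the commands above; the explicit --max-time cap is an assumption added for illustration, not the test's own setting):

	# replay the in-node ingress probe with verbose output and a short timeout
	out/minikube-linux-amd64 -p addons-351207 ssh \
	  "curl -v --max-time 30 -H 'Host: nginx.example.com' http://127.0.0.1/; echo exit=\$?"
	# exit=28 again would point at a timeout rather than an HTTP-level error
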
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-351207
helpers_test.go:235: (dbg) docker inspect addons-351207:

-- stdout --
	[
	    {
	        "Id": "9ad89cf2faa2f5edc93411156710dc311b3eb3de4b27bb438072822b7f60994c",
	        "Created": "2023-08-21T10:34:11.178684959Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 14163,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-08-21T10:34:11.46874008Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6cc01e6091959400f260dc442708e7c71630b58dab1f7c344cb00926bd84950",
	        "ResolvConfPath": "/var/lib/docker/containers/9ad89cf2faa2f5edc93411156710dc311b3eb3de4b27bb438072822b7f60994c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9ad89cf2faa2f5edc93411156710dc311b3eb3de4b27bb438072822b7f60994c/hostname",
	        "HostsPath": "/var/lib/docker/containers/9ad89cf2faa2f5edc93411156710dc311b3eb3de4b27bb438072822b7f60994c/hosts",
	        "LogPath": "/var/lib/docker/containers/9ad89cf2faa2f5edc93411156710dc311b3eb3de4b27bb438072822b7f60994c/9ad89cf2faa2f5edc93411156710dc311b3eb3de4b27bb438072822b7f60994c-json.log",
	        "Name": "/addons-351207",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-351207:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-351207",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/49acd6fcc7cb22d63c4bc06e10ffc8c7dc101bcc33c5fd58857cc31c5bff7a9d-init/diff:/var/lib/docker/overlay2/524bb0f129210e266d288d085768bab72d4735717d72ebbb4611a7bc558cb4ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/49acd6fcc7cb22d63c4bc06e10ffc8c7dc101bcc33c5fd58857cc31c5bff7a9d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/49acd6fcc7cb22d63c4bc06e10ffc8c7dc101bcc33c5fd58857cc31c5bff7a9d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/49acd6fcc7cb22d63c4bc06e10ffc8c7dc101bcc33c5fd58857cc31c5bff7a9d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-351207",
	                "Source": "/var/lib/docker/volumes/addons-351207/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-351207",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-351207",
	                "name.minikube.sigs.k8s.io": "addons-351207",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "34575173af84ce225684b446c5a7f6bc955520b55bfdcea281b66c5591e7f9a9",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/34575173af84",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-351207": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "9ad89cf2faa2",
	                        "addons-351207"
	                    ],
	                    "NetworkID": "1f25f61694ce1ec863c78278bb741e2df1d1e84f0cf29e3b5380c13d5d9b355c",
	                    "EndpointID": "b197780bec5244534709d1c4579316d0db036d0f1569b8f3b8771dbe8e17d3a4",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
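
Two details of the inspect output above are worth reading together: HostConfig.PortBindings shows HostPort "" for every published port because the container was started with the ephemeral publish form (--publish=127.0.0.1::22 and friends, visible in the start log further down), while NetworkSettings.Ports records the host ports Docker actually assigned (22/tcp on 127.0.0.1:32772, 8443/tcp on 127.0.0.1:32769, and so on). As a sketch, the same mapping can be read back without parsing the full inspect JSON:

	# query the ephemeral host port Docker assigned to the node's ssh port
	docker port addons-351207 22
	# expected for this run: 127.0.0.1:32772
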
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-351207 -n addons-351207
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-351207 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-351207 logs -n 25: (1.117187855s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-866840   | jenkins | v1.31.2 | 21 Aug 23 10:33 UTC |                     |
	|         | -p download-only-866840           |                        |         |         |                     |                     |
	|         | --force --alsologtostderr         |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                        |         |         |                     |                     |
	|         | --container-runtime=crio          |                        |         |         |                     |                     |
	|         | --driver=docker                   |                        |         |         |                     |                     |
	|         | --container-runtime=crio          |                        |         |         |                     |                     |
	| start   | -o=json --download-only           | download-only-866840   | jenkins | v1.31.2 | 21 Aug 23 10:33 UTC |                     |
	|         | -p download-only-866840           |                        |         |         |                     |                     |
	|         | --force --alsologtostderr         |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.4      |                        |         |         |                     |                     |
	|         | --container-runtime=crio          |                        |         |         |                     |                     |
	|         | --driver=docker                   |                        |         |         |                     |                     |
	|         | --container-runtime=crio          |                        |         |         |                     |                     |
	| start   | -o=json --download-only           | download-only-866840   | jenkins | v1.31.2 | 21 Aug 23 10:33 UTC |                     |
	|         | -p download-only-866840           |                        |         |         |                     |                     |
	|         | --force --alsologtostderr         |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.0-rc.1 |                        |         |         |                     |                     |
	|         | --container-runtime=crio          |                        |         |         |                     |                     |
	|         | --driver=docker                   |                        |         |         |                     |                     |
	|         | --container-runtime=crio          |                        |         |         |                     |                     |
	| delete  | --all                             | minikube               | jenkins | v1.31.2 | 21 Aug 23 10:33 UTC | 21 Aug 23 10:33 UTC |
	| delete  | -p download-only-866840           | download-only-866840   | jenkins | v1.31.2 | 21 Aug 23 10:33 UTC | 21 Aug 23 10:33 UTC |
	| delete  | -p download-only-866840           | download-only-866840   | jenkins | v1.31.2 | 21 Aug 23 10:33 UTC | 21 Aug 23 10:33 UTC |
	| start   | --download-only -p                | download-docker-557830 | jenkins | v1.31.2 | 21 Aug 23 10:33 UTC |                     |
	|         | download-docker-557830            |                        |         |         |                     |                     |
	|         | --alsologtostderr                 |                        |         |         |                     |                     |
	|         | --driver=docker                   |                        |         |         |                     |                     |
	|         | --container-runtime=crio          |                        |         |         |                     |                     |
	| delete  | -p download-docker-557830         | download-docker-557830 | jenkins | v1.31.2 | 21 Aug 23 10:33 UTC | 21 Aug 23 10:33 UTC |
	| start   | --download-only -p                | binary-mirror-393575   | jenkins | v1.31.2 | 21 Aug 23 10:33 UTC |                     |
	|         | binary-mirror-393575              |                        |         |         |                     |                     |
	|         | --alsologtostderr                 |                        |         |         |                     |                     |
	|         | --binary-mirror                   |                        |         |         |                     |                     |
	|         | http://127.0.0.1:36497            |                        |         |         |                     |                     |
	|         | --driver=docker                   |                        |         |         |                     |                     |
	|         | --container-runtime=crio          |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-393575           | binary-mirror-393575   | jenkins | v1.31.2 | 21 Aug 23 10:33 UTC | 21 Aug 23 10:33 UTC |
	| start   | -p addons-351207                  | addons-351207          | jenkins | v1.31.2 | 21 Aug 23 10:33 UTC | 21 Aug 23 10:35 UTC |
	|         | --wait=true --memory=4000         |                        |         |         |                     |                     |
	|         | --alsologtostderr                 |                        |         |         |                     |                     |
	|         | --addons=registry                 |                        |         |         |                     |                     |
	|         | --addons=metrics-server           |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots          |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver      |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                 |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner            |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget         |                        |         |         |                     |                     |
	|         | --driver=docker                   |                        |         |         |                     |                     |
	|         | --container-runtime=crio          |                        |         |         |                     |                     |
	|         | --addons=ingress                  |                        |         |         |                     |                     |
	|         | --addons=ingress-dns              |                        |         |         |                     |                     |
	|         | --addons=helm-tiller              |                        |         |         |                     |                     |
	| addons  | enable headlamp                   | addons-351207          | jenkins | v1.31.2 | 21 Aug 23 10:35 UTC | 21 Aug 23 10:35 UTC |
	|         | -p addons-351207                  |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1            |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p          | addons-351207          | jenkins | v1.31.2 | 21 Aug 23 10:35 UTC | 21 Aug 23 10:35 UTC |
	|         | addons-351207                     |                        |         |         |                     |                     |
	| ip      | addons-351207 ip                  | addons-351207          | jenkins | v1.31.2 | 21 Aug 23 10:36 UTC | 21 Aug 23 10:36 UTC |
	| addons  | addons-351207 addons disable      | addons-351207          | jenkins | v1.31.2 | 21 Aug 23 10:36 UTC | 21 Aug 23 10:36 UTC |
	|         | registry --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                              |                        |         |         |                     |                     |
	| addons  | addons-351207 addons disable      | addons-351207          | jenkins | v1.31.2 | 21 Aug 23 10:36 UTC | 21 Aug 23 10:36 UTC |
	|         | helm-tiller --alsologtostderr     |                        |         |         |                     |                     |
	|         | -v=1                              |                        |         |         |                     |                     |
	| addons  | addons-351207 addons              | addons-351207          | jenkins | v1.31.2 | 21 Aug 23 10:36 UTC | 21 Aug 23 10:36 UTC |
	|         | disable metrics-server            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1            |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p       | addons-351207          | jenkins | v1.31.2 | 21 Aug 23 10:36 UTC | 21 Aug 23 10:36 UTC |
	|         | addons-351207                     |                        |         |         |                     |                     |
	| ssh     | addons-351207 ssh curl -s         | addons-351207          | jenkins | v1.31.2 | 21 Aug 23 10:36 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:       |                        |         |         |                     |                     |
	|         | nginx.example.com'                |                        |         |         |                     |                     |
	| addons  | addons-351207 addons              | addons-351207          | jenkins | v1.31.2 | 21 Aug 23 10:37 UTC | 21 Aug 23 10:37 UTC |
	|         | disable csi-hostpath-driver       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1            |                        |         |         |                     |                     |
	| addons  | addons-351207 addons              | addons-351207          | jenkins | v1.31.2 | 21 Aug 23 10:37 UTC | 21 Aug 23 10:37 UTC |
	|         | disable volumesnapshots           |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1            |                        |         |         |                     |                     |
	| ip      | addons-351207 ip                  | addons-351207          | jenkins | v1.31.2 | 21 Aug 23 10:38 UTC | 21 Aug 23 10:38 UTC |
	| addons  | addons-351207 addons disable      | addons-351207          | jenkins | v1.31.2 | 21 Aug 23 10:38 UTC | 21 Aug 23 10:38 UTC |
	|         | ingress-dns --alsologtostderr     |                        |         |         |                     |                     |
	|         | -v=1                              |                        |         |         |                     |                     |
	| addons  | addons-351207 addons disable      | addons-351207          | jenkins | v1.31.2 | 21 Aug 23 10:38 UTC | 21 Aug 23 10:38 UTC |
	|         | ingress --alsologtostderr -v=1    |                        |         |         |                     |                     |
	|---------|-----------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/21 10:33:48
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0821 10:33:48.705967   13501 out.go:296] Setting OutFile to fd 1 ...
	I0821 10:33:48.706081   13501 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 10:33:48.706091   13501 out.go:309] Setting ErrFile to fd 2...
	I0821 10:33:48.706097   13501 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 10:33:48.706291   13501 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17102-5717/.minikube/bin
	I0821 10:33:48.706891   13501 out.go:303] Setting JSON to false
	I0821 10:33:48.707677   13501 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":979,"bootTime":1692613050,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0821 10:33:48.707729   13501 start.go:138] virtualization: kvm guest
	I0821 10:33:48.710060   13501 out.go:177] * [addons-351207] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0821 10:33:48.711516   13501 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 10:33:48.712982   13501 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 10:33:48.711553   13501 notify.go:220] Checking for updates...
	I0821 10:33:48.715617   13501 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17102-5717/kubeconfig
	I0821 10:33:48.716940   13501 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17102-5717/.minikube
	I0821 10:33:48.718228   13501 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0821 10:33:48.720584   13501 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 10:33:48.721947   13501 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 10:33:48.741572   13501 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0821 10:33:48.741680   13501 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0821 10:33:48.789627   13501 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-08-21 10:33:48.781087182 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0821 10:33:48.789722   13501 docker.go:294] overlay module found
	I0821 10:33:48.791731   13501 out.go:177] * Using the docker driver based on user configuration
	I0821 10:33:48.793141   13501 start.go:298] selected driver: docker
	I0821 10:33:48.793152   13501 start.go:902] validating driver "docker" against <nil>
	I0821 10:33:48.793163   13501 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 10:33:48.793944   13501 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0821 10:33:48.848398   13501 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-08-21 10:33:48.840342911 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0821 10:33:48.848544   13501 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0821 10:33:48.848733   13501 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0821 10:33:48.850671   13501 out.go:177] * Using Docker driver with root privileges
	I0821 10:33:48.852251   13501 cni.go:84] Creating CNI manager for ""
	I0821 10:33:48.852264   13501 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0821 10:33:48.852274   13501 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0821 10:33:48.852281   13501 start_flags.go:319] config:
	{Name:addons-351207 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:addons-351207 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 10:33:48.853756   13501 out.go:177] * Starting control plane node addons-351207 in cluster addons-351207
	I0821 10:33:48.855008   13501 cache.go:122] Beginning downloading kic base image for docker with crio
	I0821 10:33:48.856382   13501 out.go:177] * Pulling base image ...
	I0821 10:33:48.857587   13501 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0821 10:33:48.857620   13501 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17102-5717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4
	I0821 10:33:48.857628   13501 cache.go:57] Caching tarball of preloaded images
	I0821 10:33:48.857686   13501 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0821 10:33:48.857708   13501 preload.go:174] Found /home/jenkins/minikube-integration/17102-5717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0821 10:33:48.857717   13501 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on crio
	I0821 10:33:48.858029   13501 profile.go:148] Saving config to /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/config.json ...
	I0821 10:33:48.858058   13501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/config.json: {Name:mkd52f02b38adf36311cfbdcd7442fb8841f995d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 10:33:48.872541   13501 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0821 10:33:48.872647   13501 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory
	I0821 10:33:48.872662   13501 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory, skipping pull
	I0821 10:33:48.872666   13501 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in cache, skipping pull
	I0821 10:33:48.872679   13501 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 as a tarball
	I0821 10:33:48.872689   13501 cache.go:163] Loading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 from local cache
	I0821 10:33:59.672336   13501 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 from cached tarball
	I0821 10:33:59.672368   13501 cache.go:195] Successfully downloaded all kic artifacts
	I0821 10:33:59.672407   13501 start.go:365] acquiring machines lock for addons-351207: {Name:mk097d89e3c893cd856bcb67e5fd12b596cc101e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 10:33:59.672489   13501 start.go:369] acquired machines lock for "addons-351207" in 64.496µs
	I0821 10:33:59.672516   13501 start.go:93] Provisioning new machine with config: &{Name:addons-351207 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:addons-351207 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0821 10:33:59.672589   13501 start.go:125] createHost starting for "" (driver="docker")
	I0821 10:33:59.674256   13501 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0821 10:33:59.674461   13501 start.go:159] libmachine.API.Create for "addons-351207" (driver="docker")
	I0821 10:33:59.674491   13501 client.go:168] LocalClient.Create starting
	I0821 10:33:59.674581   13501 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca.pem
	I0821 10:33:59.936234   13501 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/cert.pem
	I0821 10:34:00.059521   13501 cli_runner.go:164] Run: docker network inspect addons-351207 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0821 10:34:00.074668   13501 cli_runner.go:211] docker network inspect addons-351207 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0821 10:34:00.074723   13501 network_create.go:281] running [docker network inspect addons-351207] to gather additional debugging logs...
	I0821 10:34:00.074739   13501 cli_runner.go:164] Run: docker network inspect addons-351207
	W0821 10:34:00.089430   13501 cli_runner.go:211] docker network inspect addons-351207 returned with exit code 1
	I0821 10:34:00.089455   13501 network_create.go:284] error running [docker network inspect addons-351207]: docker network inspect addons-351207: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-351207 not found
	I0821 10:34:00.089466   13501 network_create.go:286] output of [docker network inspect addons-351207]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-351207 not found
	
	** /stderr **
	I0821 10:34:00.089517   13501 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0821 10:34:00.104197   13501 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0014b6000}
	I0821 10:34:00.104242   13501 network_create.go:123] attempt to create docker network addons-351207 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0821 10:34:00.104289   13501 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-351207 addons-351207
	I0821 10:34:00.154180   13501 network_create.go:107] docker network addons-351207 192.168.49.0/24 created
	I0821 10:34:00.154205   13501 kic.go:117] calculated static IP "192.168.49.2" for the "addons-351207" container
	I0821 10:34:00.154251   13501 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0821 10:34:00.169016   13501 cli_runner.go:164] Run: docker volume create addons-351207 --label name.minikube.sigs.k8s.io=addons-351207 --label created_by.minikube.sigs.k8s.io=true
	I0821 10:34:00.184497   13501 oci.go:103] Successfully created a docker volume addons-351207
	I0821 10:34:00.184575   13501 cli_runner.go:164] Run: docker run --rm --name addons-351207-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-351207 --entrypoint /usr/bin/test -v addons-351207:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0821 10:34:06.210461   13501 cli_runner.go:217] Completed: docker run --rm --name addons-351207-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-351207 --entrypoint /usr/bin/test -v addons-351207:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib: (6.025849355s)
	I0821 10:34:06.210486   13501 oci.go:107] Successfully prepared a docker volume addons-351207
	I0821 10:34:06.210496   13501 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0821 10:34:06.210513   13501 kic.go:190] Starting extracting preloaded images to volume ...
	I0821 10:34:06.210564   13501 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17102-5717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-351207:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0821 10:34:11.115721   13501 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17102-5717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-351207:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (4.905106151s)
	I0821 10:34:11.115753   13501 kic.go:199] duration metric: took 4.905238 seconds to extract preloaded images to volume
	W0821 10:34:11.115879   13501 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0821 10:34:11.115965   13501 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0821 10:34:11.164639   13501 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-351207 --name addons-351207 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-351207 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-351207 --network addons-351207 --ip 192.168.49.2 --volume addons-351207:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0821 10:34:11.476299   13501 cli_runner.go:164] Run: docker container inspect addons-351207 --format={{.State.Running}}
	I0821 10:34:11.493410   13501 cli_runner.go:164] Run: docker container inspect addons-351207 --format={{.State.Status}}
	I0821 10:34:11.510594   13501 cli_runner.go:164] Run: docker exec addons-351207 stat /var/lib/dpkg/alternatives/iptables
	I0821 10:34:11.570325   13501 oci.go:144] the created container "addons-351207" has a running status.
	I0821 10:34:11.570360   13501 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17102-5717/.minikube/machines/addons-351207/id_rsa...
	I0821 10:34:11.837591   13501 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17102-5717/.minikube/machines/addons-351207/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0821 10:34:11.868683   13501 cli_runner.go:164] Run: docker container inspect addons-351207 --format={{.State.Status}}
	I0821 10:34:11.889241   13501 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0821 10:34:11.889267   13501 kic_runner.go:114] Args: [docker exec --privileged addons-351207 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0821 10:34:11.952993   13501 cli_runner.go:164] Run: docker container inspect addons-351207 --format={{.State.Status}}
	I0821 10:34:11.971253   13501 machine.go:88] provisioning docker machine ...
	I0821 10:34:11.971290   13501 ubuntu.go:169] provisioning hostname "addons-351207"
	I0821 10:34:11.971345   13501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351207
	I0821 10:34:11.993983   13501 main.go:141] libmachine: Using SSH client type: native
	I0821 10:34:11.994619   13501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0821 10:34:11.994651   13501 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-351207 && echo "addons-351207" | sudo tee /etc/hostname
	I0821 10:34:12.157612   13501 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-351207
	
	I0821 10:34:12.157678   13501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351207
	I0821 10:34:12.175819   13501 main.go:141] libmachine: Using SSH client type: native
	I0821 10:34:12.176412   13501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0821 10:34:12.176440   13501 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-351207' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-351207/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-351207' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0821 10:34:12.298874   13501 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0821 10:34:12.298902   13501 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17102-5717/.minikube CaCertPath:/home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17102-5717/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17102-5717/.minikube}
	I0821 10:34:12.298947   13501 ubuntu.go:177] setting up certificates
	I0821 10:34:12.298955   13501 provision.go:83] configureAuth start
	I0821 10:34:12.299000   13501 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-351207
	I0821 10:34:12.313857   13501 provision.go:138] copyHostCerts
	I0821 10:34:12.313926   13501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17102-5717/.minikube/ca.pem (1078 bytes)
	I0821 10:34:12.314041   13501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17102-5717/.minikube/cert.pem (1123 bytes)
	I0821 10:34:12.314097   13501 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17102-5717/.minikube/key.pem (1675 bytes)
	I0821 10:34:12.314141   13501 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17102-5717/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca-key.pem org=jenkins.addons-351207 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-351207]
	I0821 10:34:12.387221   13501 provision.go:172] copyRemoteCerts
	I0821 10:34:12.387280   13501 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0821 10:34:12.387312   13501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351207
	I0821 10:34:12.402611   13501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/addons-351207/id_rsa Username:docker}
	I0821 10:34:12.495256   13501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0821 10:34:12.516033   13501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0821 10:34:12.536089   13501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0821 10:34:12.555670   13501 provision.go:86] duration metric: configureAuth took 256.701472ms
	I0821 10:34:12.555698   13501 ubuntu.go:193] setting minikube options for container-runtime
	I0821 10:34:12.555877   13501 config.go:182] Loaded profile config "addons-351207": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0821 10:34:12.555982   13501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351207
	I0821 10:34:12.571638   13501 main.go:141] libmachine: Using SSH client type: native
	I0821 10:34:12.572030   13501 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0821 10:34:12.572046   13501 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0821 10:34:12.776152   13501 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0821 10:34:12.776175   13501 machine.go:91] provisioned docker machine in 804.898605ms
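For reference, the CRIO_MINIKUBE_OPTIONS drop-in written above can be verified on the node once provisioning finishes; a minimal sketch, assuming `minikube ssh` access to this profile:

	#!/usr/bin/env bash
	# Confirm the insecure-registry flag minikube wrote for CRI-O
	# (profile name and file path are taken from the log above).
	minikube -p addons-351207 ssh -- cat /etc/sysconfig/crio.minikube
	# Check that crio came back up after the restart.
	minikube -p addons-351207 ssh -- sudo systemctl is-active crio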
	I0821 10:34:12.776183   13501 client.go:171] LocalClient.Create took 13.101684323s
	I0821 10:34:12.776199   13501 start.go:167] duration metric: libmachine.API.Create for "addons-351207" took 13.101739376s
	I0821 10:34:12.776205   13501 start.go:300] post-start starting for "addons-351207" (driver="docker")
	I0821 10:34:12.776215   13501 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0821 10:34:12.776286   13501 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0821 10:34:12.776332   13501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351207
	I0821 10:34:12.791546   13501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/addons-351207/id_rsa Username:docker}
	I0821 10:34:12.879531   13501 ssh_runner.go:195] Run: cat /etc/os-release
	I0821 10:34:12.882237   13501 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0821 10:34:12.882272   13501 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0821 10:34:12.882289   13501 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0821 10:34:12.882295   13501 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0821 10:34:12.882303   13501 filesync.go:126] Scanning /home/jenkins/minikube-integration/17102-5717/.minikube/addons for local assets ...
	I0821 10:34:12.882351   13501 filesync.go:126] Scanning /home/jenkins/minikube-integration/17102-5717/.minikube/files for local assets ...
	I0821 10:34:12.882373   13501 start.go:303] post-start completed in 106.163888ms
	I0821 10:34:12.882625   13501 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-351207
	I0821 10:34:12.900442   13501 profile.go:148] Saving config to /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/config.json ...
	I0821 10:34:12.900693   13501 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0821 10:34:12.900744   13501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351207
	I0821 10:34:12.916793   13501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/addons-351207/id_rsa Username:docker}
	I0821 10:34:13.003681   13501 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0821 10:34:13.007439   13501 start.go:128] duration metric: createHost completed in 13.334832781s
	I0821 10:34:13.007462   13501 start.go:83] releasing machines lock for "addons-351207", held for 13.33496144s
	I0821 10:34:13.007533   13501 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-351207
	I0821 10:34:13.023279   13501 ssh_runner.go:195] Run: cat /version.json
	I0821 10:34:13.023323   13501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351207
	I0821 10:34:13.023389   13501 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0821 10:34:13.023454   13501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351207
	I0821 10:34:13.040842   13501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/addons-351207/id_rsa Username:docker}
	I0821 10:34:13.041193   13501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/addons-351207/id_rsa Username:docker}
	I0821 10:34:13.223402   13501 ssh_runner.go:195] Run: systemctl --version
	I0821 10:34:13.227171   13501 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0821 10:34:13.360547   13501 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0821 10:34:13.364495   13501 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0821 10:34:13.380847   13501 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0821 10:34:13.380925   13501 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0821 10:34:13.405581   13501 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
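The two find invocations above are the disable mechanism: loopback, bridge, and podman CNI configs are renamed with a `.mk_disabled` suffix so CRI-O ignores them and kindnet can own the pod network. The same pattern in standalone form (a sketch; paths as in the log):

	#!/usr/bin/env bash
	# Rename conflicting CNI configs out of the way instead of deleting them,
	# so they can be restored later by stripping the suffix.
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
	  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;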
	I0821 10:34:13.405610   13501 start.go:466] detecting cgroup driver to use...
	I0821 10:34:13.405637   13501 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0821 10:34:13.405681   13501 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0821 10:34:13.418672   13501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0821 10:34:13.428241   13501 docker.go:196] disabling cri-docker service (if available) ...
	I0821 10:34:13.428288   13501 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0821 10:34:13.439860   13501 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0821 10:34:13.451827   13501 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0821 10:34:13.522229   13501 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0821 10:34:13.598368   13501 docker.go:212] disabling docker service ...
	I0821 10:34:13.598431   13501 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0821 10:34:13.614975   13501 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0821 10:34:13.624750   13501 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0821 10:34:13.697999   13501 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0821 10:34:13.769780   13501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0821 10:34:13.779203   13501 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0821 10:34:13.792516   13501 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0821 10:34:13.792578   13501 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0821 10:34:13.800367   13501 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0821 10:34:13.800428   13501 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0821 10:34:13.808443   13501 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0821 10:34:13.816530   13501 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
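After the three sed edits above, the relevant keys in the drop-in should read approximately as follows; a quick way to check on the node (the section layout of the stock 02-crio.conf is an assumption):

	#!/usr/bin/env bash
	# Show the keys the sed edits are expected to leave behind.
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# Expected, approximately:
	#   pause_image = "registry.k8s.io/pause:3.9"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"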
	I0821 10:34:13.824438   13501 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0821 10:34:13.831990   13501 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0821 10:34:13.838849   13501 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0821 10:34:13.845456   13501 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0821 10:34:13.913099   13501 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0821 10:34:14.017233   13501 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0821 10:34:14.017294   13501 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0821 10:34:14.020310   13501 start.go:534] Will wait 60s for crictl version
	I0821 10:34:14.020352   13501 ssh_runner.go:195] Run: which crictl
	I0821 10:34:14.023056   13501 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0821 10:34:14.052687   13501 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0821 10:34:14.052795   13501 ssh_runner.go:195] Run: crio --version
	I0821 10:34:14.085112   13501 ssh_runner.go:195] Run: crio --version
	I0821 10:34:14.119303   13501 out.go:177] * Preparing Kubernetes v1.27.4 on CRI-O 1.24.6 ...
	I0821 10:34:14.120795   13501 cli_runner.go:164] Run: docker network inspect addons-351207 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0821 10:34:14.135883   13501 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0821 10:34:14.139047   13501 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0821 10:34:14.148451   13501 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0821 10:34:14.148504   13501 ssh_runner.go:195] Run: sudo crictl images --output json
	I0821 10:34:14.195788   13501 crio.go:496] all images are preloaded for cri-o runtime.
	I0821 10:34:14.195809   13501 crio.go:415] Images already preloaded, skipping extraction
	I0821 10:34:14.195846   13501 ssh_runner.go:195] Run: sudo crictl images --output json
	I0821 10:34:14.225379   13501 crio.go:496] all images are preloaded for cri-o runtime.
	I0821 10:34:14.225398   13501 cache_images.go:84] Images are preloaded, skipping loading
	I0821 10:34:14.225474   13501 ssh_runner.go:195] Run: crio config
	I0821 10:34:14.264549   13501 cni.go:84] Creating CNI manager for ""
	I0821 10:34:14.264574   13501 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0821 10:34:14.264594   13501 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0821 10:34:14.264618   13501 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.27.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-351207 NodeName:addons-351207 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0821 10:34:14.264785   13501 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-351207"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
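The generated config above is later written to /var/tmp/minikube/kubeadm.yaml (see the scp and cp steps below). A hedged way to sanity-check such a file before init, mirroring the staged-binary invocation the log itself uses:

	#!/usr/bin/env bash
	# Dry-run init against the rendered config; nothing is persisted.
	# Binary and config paths are taken from this run's log lines.
	sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml --dry-run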
	
	I0821 10:34:14.264870   13501 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-351207 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.4 ClusterName:addons-351207 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
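The drop-in above replaces ExecStart wholesale (the empty `ExecStart=` clears the stock unit's command before the new one is set). Once the node is running, the merged unit and the live flags can be inspected; a sketch:

	#!/usr/bin/env bash
	# Show the kubelet unit merged with the 10-kubeadm.conf drop-in,
	# and the flags the running process actually received.
	minikube -p addons-351207 ssh -- systemctl cat kubelet
	minikube -p addons-351207 ssh -- pgrep -a kubelet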
	I0821 10:34:14.264929   13501 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.4
	I0821 10:34:14.272471   13501 binaries.go:44] Found k8s binaries, skipping transfer
	I0821 10:34:14.272529   13501 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0821 10:34:14.279736   13501 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I0821 10:34:14.294138   13501 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0821 10:34:14.308597   13501 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0821 10:34:14.323203   13501 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0821 10:34:14.325984   13501 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0821 10:34:14.334808   13501 certs.go:56] Setting up /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207 for IP: 192.168.49.2
	I0821 10:34:14.334834   13501 certs.go:190] acquiring lock for shared ca certs: {Name:mkb88db7eb1befc1f1b3279575458c71b2313cc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 10:34:14.334949   13501 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17102-5717/.minikube/ca.key
	I0821 10:34:14.521484   13501 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17102-5717/.minikube/ca.crt ...
	I0821 10:34:14.521511   13501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-5717/.minikube/ca.crt: {Name:mkdf36bc81bf041c7029cc3b68af5687fac42021 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 10:34:14.521698   13501 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17102-5717/.minikube/ca.key ...
	I0821 10:34:14.521714   13501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-5717/.minikube/ca.key: {Name:mk204d20ad014173008a5035af4a9d8039695716 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 10:34:14.521810   13501 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17102-5717/.minikube/proxy-client-ca.key
	I0821 10:34:14.661377   13501 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17102-5717/.minikube/proxy-client-ca.crt ...
	I0821 10:34:14.661410   13501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-5717/.minikube/proxy-client-ca.crt: {Name:mkc80f87a243f6b765d428cd9016566ab2a66c59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 10:34:14.661625   13501 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17102-5717/.minikube/proxy-client-ca.key ...
	I0821 10:34:14.661641   13501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-5717/.minikube/proxy-client-ca.key: {Name:mk07392fec539a478c6a1f4f0b1327012638651f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 10:34:14.661797   13501 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/client.key
	I0821 10:34:14.661818   13501 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/client.crt with IP's: []
	I0821 10:34:14.808153   13501 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/client.crt ...
	I0821 10:34:14.808184   13501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/client.crt: {Name:mkb10e7ff4feeecda1dedd7049820dcfba53d00d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 10:34:14.808370   13501 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/client.key ...
	I0821 10:34:14.808387   13501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/client.key: {Name:mkaea8902854cce69c0b7f226db8a59a59632bd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 10:34:14.808489   13501 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/apiserver.key.dd3b5fb2
	I0821 10:34:14.808513   13501 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0821 10:34:14.919930   13501 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/apiserver.crt.dd3b5fb2 ...
	I0821 10:34:14.919957   13501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/apiserver.crt.dd3b5fb2: {Name:mk49431d5f0edf6b82aa543e1e310f1ba236de14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 10:34:14.920137   13501 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/apiserver.key.dd3b5fb2 ...
	I0821 10:34:14.920155   13501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/apiserver.key.dd3b5fb2: {Name:mkad056baa1e259630196ad91e9f605679e2180d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 10:34:14.920253   13501 certs.go:337] copying /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/apiserver.crt
	I0821 10:34:14.920339   13501 certs.go:341] copying /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/apiserver.key
	I0821 10:34:14.920397   13501 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/proxy-client.key
	I0821 10:34:14.920420   13501 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/proxy-client.crt with IP's: []
	I0821 10:34:15.090934   13501 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/proxy-client.crt ...
	I0821 10:34:15.090964   13501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/proxy-client.crt: {Name:mk93e41179aa03fc3540bd46d5ec6455421b4c31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 10:34:15.091138   13501 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/proxy-client.key ...
	I0821 10:34:15.091154   13501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/proxy-client.key: {Name:mka14032edf95e7220b8095174f4c07eb58063fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
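Each keypair above follows the same generate/write-cert/write-key sequence under a file lock. The SANs baked into the apiserver cert (the `IP's: [...]` list logged at generation time) can be confirmed with openssl; a sketch using the host-side path from this run:

	#!/usr/bin/env bash
	# Print the Subject Alternative Names of the generated apiserver cert;
	# they should match the IP list logged above.
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'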
	I0821 10:34:15.091384   13501 certs.go:437] found cert: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca-key.pem (1679 bytes)
	I0821 10:34:15.091428   13501 certs.go:437] found cert: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca.pem (1078 bytes)
	I0821 10:34:15.091465   13501 certs.go:437] found cert: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/cert.pem (1123 bytes)
	I0821 10:34:15.091505   13501 certs.go:437] found cert: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/key.pem (1675 bytes)
	I0821 10:34:15.092083   13501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0821 10:34:15.113126   13501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0821 10:34:15.133240   13501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0821 10:34:15.153148   13501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0821 10:34:15.173349   13501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0821 10:34:15.193722   13501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0821 10:34:15.214116   13501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0821 10:34:15.234283   13501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0821 10:34:15.255190   13501 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0821 10:34:15.275414   13501 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0821 10:34:15.290168   13501 ssh_runner.go:195] Run: openssl version
	I0821 10:34:15.294849   13501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0821 10:34:15.302847   13501 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0821 10:34:15.305840   13501 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 21 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I0821 10:34:15.305895   13501 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0821 10:34:15.312010   13501 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
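The symlink name b5213941.0 is not arbitrary: it is the OpenSSL subject hash of the CA (computed by the `openssl x509 -hash` run two lines above) plus a .0 suffix, which is how the system trust store indexes certificates. For illustration:

	#!/usr/bin/env bash
	# The subject hash determines the trust-store symlink name: <hash>.0
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# Expected output: b5213941, matching /etc/ssl/certs/b5213941.0 above.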
	I0821 10:34:15.320019   13501 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0821 10:34:15.322819   13501 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0821 10:34:15.322879   13501 kubeadm.go:404] StartCluster: {Name:addons-351207 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:addons-351207 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 10:34:15.322956   13501 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0821 10:34:15.323009   13501 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0821 10:34:15.354169   13501 cri.go:89] found id: ""
	I0821 10:34:15.354242   13501 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0821 10:34:15.361862   13501 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0821 10:34:15.369511   13501 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0821 10:34:15.369578   13501 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0821 10:34:15.377089   13501 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0821 10:34:15.377176   13501 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0821 10:34:15.418329   13501 kubeadm.go:322] [init] Using Kubernetes version: v1.27.4
	I0821 10:34:15.418467   13501 kubeadm.go:322] [preflight] Running pre-flight checks
	I0821 10:34:15.452121   13501 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0821 10:34:15.452223   13501 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1039-gcp
	I0821 10:34:15.452292   13501 kubeadm.go:322] OS: Linux
	I0821 10:34:15.452378   13501 kubeadm.go:322] CGROUPS_CPU: enabled
	I0821 10:34:15.452433   13501 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0821 10:34:15.452488   13501 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0821 10:34:15.452535   13501 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0821 10:34:15.452576   13501 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0821 10:34:15.452642   13501 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0821 10:34:15.452719   13501 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0821 10:34:15.452782   13501 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0821 10:34:15.452858   13501 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0821 10:34:15.510099   13501 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0821 10:34:15.510236   13501 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0821 10:34:15.510386   13501 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0821 10:34:15.687237   13501 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0821 10:34:15.690243   13501 out.go:204]   - Generating certificates and keys ...
	I0821 10:34:15.690341   13501 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0821 10:34:15.690443   13501 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0821 10:34:15.790212   13501 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0821 10:34:15.992702   13501 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0821 10:34:16.335395   13501 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0821 10:34:16.406241   13501 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0821 10:34:16.750860   13501 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0821 10:34:16.750980   13501 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-351207 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0821 10:34:16.894325   13501 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0821 10:34:16.894488   13501 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-351207 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0821 10:34:16.993627   13501 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0821 10:34:17.286773   13501 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0821 10:34:17.410672   13501 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0821 10:34:17.410785   13501 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0821 10:34:17.750405   13501 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0821 10:34:17.868504   13501 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0821 10:34:17.995957   13501 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0821 10:34:18.167824   13501 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0821 10:34:18.175297   13501 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0821 10:34:18.175939   13501 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0821 10:34:18.175998   13501 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0821 10:34:18.245755   13501 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0821 10:34:18.248244   13501 out.go:204]   - Booting up control plane ...
	I0821 10:34:18.248383   13501 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0821 10:34:18.249109   13501 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0821 10:34:18.249986   13501 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0821 10:34:18.250694   13501 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0821 10:34:18.252534   13501 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0821 10:34:23.256237   13501 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.003548 seconds
	I0821 10:34:23.256411   13501 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0821 10:34:23.268865   13501 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0821 10:34:23.788303   13501 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0821 10:34:23.788566   13501 kubeadm.go:322] [mark-control-plane] Marking the node addons-351207 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0821 10:34:24.297520   13501 kubeadm.go:322] [bootstrap-token] Using token: gst4qc.cdf7cm2p7v3v6z1v
	I0821 10:34:24.299041   13501 out.go:204]   - Configuring RBAC rules ...
	I0821 10:34:24.299192   13501 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0821 10:34:24.302289   13501 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0821 10:34:24.307479   13501 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0821 10:34:24.309791   13501 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0821 10:34:24.312153   13501 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0821 10:34:24.315878   13501 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0821 10:34:24.323660   13501 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0821 10:34:24.526228   13501 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0821 10:34:24.706127   13501 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0821 10:34:24.706944   13501 kubeadm.go:322] 
	I0821 10:34:24.707040   13501 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0821 10:34:24.707056   13501 kubeadm.go:322] 
	I0821 10:34:24.707118   13501 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0821 10:34:24.707125   13501 kubeadm.go:322] 
	I0821 10:34:24.707144   13501 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0821 10:34:24.707194   13501 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0821 10:34:24.707234   13501 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0821 10:34:24.707240   13501 kubeadm.go:322] 
	I0821 10:34:24.707286   13501 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0821 10:34:24.707292   13501 kubeadm.go:322] 
	I0821 10:34:24.707328   13501 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0821 10:34:24.707338   13501 kubeadm.go:322] 
	I0821 10:34:24.707404   13501 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0821 10:34:24.707463   13501 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0821 10:34:24.707521   13501 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0821 10:34:24.707527   13501 kubeadm.go:322] 
	I0821 10:34:24.707630   13501 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0821 10:34:24.707735   13501 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0821 10:34:24.707746   13501 kubeadm.go:322] 
	I0821 10:34:24.707827   13501 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token gst4qc.cdf7cm2p7v3v6z1v \
	I0821 10:34:24.707961   13501 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a6ae141b3a3795878aa14999e04688399a9a305fa66151b732d0ee2f32cf9691 \
	I0821 10:34:24.707996   13501 kubeadm.go:322] 	--control-plane 
	I0821 10:34:24.708005   13501 kubeadm.go:322] 
	I0821 10:34:24.708123   13501 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0821 10:34:24.708133   13501 kubeadm.go:322] 
	I0821 10:34:24.708212   13501 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token gst4qc.cdf7cm2p7v3v6z1v \
	I0821 10:34:24.708320   13501 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a6ae141b3a3795878aa14999e04688399a9a305fa66151b732d0ee2f32cf9691 
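The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key. kubeadm's documented recipe recomputes it from the CA file; using this cluster's certificatesDir (/var/lib/minikube/certs) from the config above:

	#!/usr/bin/env bash
	# Recompute the discovery hash; the output should match the sha256:...
	# value in the join command above.
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'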
	I0821 10:34:24.709781   13501 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1039-gcp\n", err: exit status 1
	I0821 10:34:24.709924   13501 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0821 10:34:24.709944   13501 cni.go:84] Creating CNI manager for ""
	I0821 10:34:24.709954   13501 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0821 10:34:24.711727   13501 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0821 10:34:24.713137   13501 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0821 10:34:24.716307   13501 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.27.4/kubectl ...
	I0821 10:34:24.716320   13501 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0821 10:34:24.744969   13501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0821 10:34:25.372772   13501 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0821 10:34:25.372858   13501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:34:25.372901   13501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43 minikube.k8s.io/name=addons-351207 minikube.k8s.io/updated_at=2023_08_21T10_34_25_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:34:25.379272   13501 ops.go:34] apiserver oom_adj: -16
	I0821 10:34:25.453282   13501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:34:25.513669   13501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:34:26.073889   13501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:34:26.573315   13501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:34:27.073882   13501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:34:27.574054   13501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:34:28.073574   13501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:34:28.573434   13501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:34:29.073556   13501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:34:29.574277   13501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:34:30.073379   13501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:34:30.573342   13501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:34:31.074132   13501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:34:31.573374   13501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:34:32.073639   13501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:34:32.573934   13501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:34:33.073916   13501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:34:33.573759   13501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:34:34.074049   13501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:34:34.573566   13501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:34:35.073441   13501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:34:35.574306   13501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:34:36.074334   13501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:34:36.574087   13501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:34:37.073851   13501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:34:37.574307   13501 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:34:37.639003   13501 kubeadm.go:1081] duration metric: took 12.266201088s to wait for elevateKubeSystemPrivileges.
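The run of `kubectl get sa default` calls above is a poll: minikube retries roughly every 500ms until the default ServiceAccount exists, which is the signal that kube-system privileges can be elevated. The equivalent standalone wait, as a sketch:

	#!/usr/bin/env bash
	# Poll until the default ServiceAccount appears, mirroring the retry
	# loop in the log (binary and kubeconfig paths as logged above).
	until sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done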
	I0821 10:34:37.639029   13501 kubeadm.go:406] StartCluster complete in 22.316155531s
	I0821 10:34:37.639044   13501 settings.go:142] acquiring lock: {Name:mkafc51d9ee0fb589973b887f0111ccc8fd1075b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 10:34:37.639142   13501 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17102-5717/kubeconfig
	I0821 10:34:37.639500   13501 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-5717/kubeconfig: {Name:mkb50cf560191d5f6ff2b436dd414f0b5471024e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 10:34:37.639681   13501 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0821 10:34:37.639754   13501 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0821 10:34:37.639871   13501 config.go:182] Loaded profile config "addons-351207": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0821 10:34:37.639886   13501 addons.go:69] Setting ingress-dns=true in profile "addons-351207"
	I0821 10:34:37.639892   13501 addons.go:69] Setting ingress=true in profile "addons-351207"
	I0821 10:34:37.639914   13501 addons.go:69] Setting registry=true in profile "addons-351207"
	I0821 10:34:37.639930   13501 addons.go:231] Setting addon ingress=true in "addons-351207"
	I0821 10:34:37.639949   13501 addons.go:69] Setting cloud-spanner=true in profile "addons-351207"
	I0821 10:34:37.639948   13501 addons.go:69] Setting default-storageclass=true in profile "addons-351207"
	I0821 10:34:37.639970   13501 addons.go:69] Setting gcp-auth=true in profile "addons-351207"
	I0821 10:34:37.639976   13501 addons.go:69] Setting helm-tiller=true in profile "addons-351207"
	I0821 10:34:37.639989   13501 mustload.go:65] Loading cluster: addons-351207
	I0821 10:34:37.639983   13501 addons.go:69] Setting storage-provisioner=true in profile "addons-351207"
	I0821 10:34:37.639900   13501 addons.go:231] Setting addon ingress-dns=true in "addons-351207"
	I0821 10:34:37.640009   13501 host.go:66] Checking if "addons-351207" exists ...
	I0821 10:34:37.640020   13501 addons.go:231] Setting addon storage-provisioner=true in "addons-351207"
	I0821 10:34:37.639874   13501 addons.go:69] Setting volumesnapshots=true in profile "addons-351207"
	I0821 10:34:37.640053   13501 addons.go:231] Setting addon volumesnapshots=true in "addons-351207"
	I0821 10:34:37.640078   13501 host.go:66] Checking if "addons-351207" exists ...
	I0821 10:34:37.640104   13501 host.go:66] Checking if "addons-351207" exists ...
	I0821 10:34:37.640005   13501 addons.go:231] Setting addon helm-tiller=true in "addons-351207"
	I0821 10:34:37.639978   13501 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-351207"
	I0821 10:34:37.640156   13501 host.go:66] Checking if "addons-351207" exists ...
	I0821 10:34:37.639940   13501 addons.go:69] Setting inspektor-gadget=true in profile "addons-351207"
	I0821 10:34:37.640198   13501 addons.go:231] Setting addon inspektor-gadget=true in "addons-351207"
	I0821 10:34:37.640232   13501 host.go:66] Checking if "addons-351207" exists ...
	I0821 10:34:37.640241   13501 config.go:182] Loaded profile config "addons-351207": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0821 10:34:37.640108   13501 host.go:66] Checking if "addons-351207" exists ...
	I0821 10:34:37.640429   13501 cli_runner.go:164] Run: docker container inspect addons-351207 --format={{.State.Status}}
	I0821 10:34:37.640567   13501 cli_runner.go:164] Run: docker container inspect addons-351207 --format={{.State.Status}}
	I0821 10:34:37.640571   13501 cli_runner.go:164] Run: docker container inspect addons-351207 --format={{.State.Status}}
	I0821 10:34:37.639931   13501 addons.go:69] Setting metrics-server=true in profile "addons-351207"
	I0821 10:34:37.640655   13501 cli_runner.go:164] Run: docker container inspect addons-351207 --format={{.State.Status}}
	I0821 10:34:37.639956   13501 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-351207"
	I0821 10:34:37.640695   13501 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-351207"
	I0821 10:34:37.640715   13501 cli_runner.go:164] Run: docker container inspect addons-351207 --format={{.State.Status}}
	I0821 10:34:37.640568   13501 cli_runner.go:164] Run: docker container inspect addons-351207 --format={{.State.Status}}
	I0821 10:34:37.639958   13501 addons.go:231] Setting addon cloud-spanner=true in "addons-351207"
	I0821 10:34:37.640880   13501 host.go:66] Checking if "addons-351207" exists ...
	I0821 10:34:37.640568   13501 cli_runner.go:164] Run: docker container inspect addons-351207 --format={{.State.Status}}
	I0821 10:34:37.640569   13501 cli_runner.go:164] Run: docker container inspect addons-351207 --format={{.State.Status}}
	I0821 10:34:37.640658   13501 addons.go:231] Setting addon metrics-server=true in "addons-351207"
	I0821 10:34:37.641292   13501 host.go:66] Checking if "addons-351207" exists ...
	I0821 10:34:37.641310   13501 cli_runner.go:164] Run: docker container inspect addons-351207 --format={{.State.Status}}
	I0821 10:34:37.639937   13501 addons.go:231] Setting addon registry=true in "addons-351207"
	I0821 10:34:37.641380   13501 host.go:66] Checking if "addons-351207" exists ...
	I0821 10:34:37.640729   13501 host.go:66] Checking if "addons-351207" exists ...
	I0821 10:34:37.641756   13501 cli_runner.go:164] Run: docker container inspect addons-351207 --format={{.State.Status}}
	I0821 10:34:37.641779   13501 cli_runner.go:164] Run: docker container inspect addons-351207 --format={{.State.Status}}
	I0821 10:34:37.641872   13501 cli_runner.go:164] Run: docker container inspect addons-351207 --format={{.State.Status}}
	I0821 10:34:37.679243   13501 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0821 10:34:37.681569   13501 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.8.1
	I0821 10:34:37.681524   13501 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0821 10:34:37.685557   13501 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0821 10:34:37.683795   13501 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0821 10:34:37.683813   13501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0821 10:34:37.685819   13501 host.go:66] Checking if "addons-351207" exists ...
	I0821 10:34:37.688436   13501 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0821 10:34:37.688493   13501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351207
	I0821 10:34:37.689873   13501 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0821 10:34:37.689962   13501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0821 10:34:37.691581   13501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351207
	I0821 10:34:37.691655   13501 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0821 10:34:37.691669   13501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I0821 10:34:37.691787   13501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351207
	I0821 10:34:37.693109   13501 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.19.0
	I0821 10:34:37.694491   13501 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0821 10:34:37.695983   13501 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0821 10:34:37.695989   13501 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.9
	I0821 10:34:37.696002   13501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0821 10:34:37.694730   13501 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0821 10:34:37.694740   13501 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0821 10:34:37.696056   13501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351207
	I0821 10:34:37.698256   13501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0821 10:34:37.698311   13501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351207
	I0821 10:34:37.698473   13501 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I0821 10:34:37.698482   13501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1003 bytes)
	I0821 10:34:37.698516   13501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351207
	I0821 10:34:37.700316   13501 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0821 10:34:37.700333   13501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0821 10:34:37.700379   13501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351207
	I0821 10:34:37.700028   13501 addons.go:231] Setting addon default-storageclass=true in "addons-351207"
	I0821 10:34:37.700498   13501 host.go:66] Checking if "addons-351207" exists ...
	I0821 10:34:37.700972   13501 cli_runner.go:164] Run: docker container inspect addons-351207 --format={{.State.Status}}
	I0821 10:34:37.713359   13501 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0821 10:34:37.715069   13501 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0821 10:34:37.715089   13501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0821 10:34:37.715156   13501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351207
	I0821 10:34:37.720358   13501 out.go:177]   - Using image docker.io/registry:2.8.1
	I0821 10:34:37.721883   13501 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0821 10:34:37.723624   13501 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0821 10:34:37.723602   13501 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I0821 10:34:37.727990   13501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/addons-351207/id_rsa Username:docker}
	I0821 10:34:37.728084   13501 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-351207" context rescaled to 1 replicas
	I0821 10:34:37.730583   13501 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0821 10:34:37.730609   13501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0821 10:34:37.730604   13501 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0821 10:34:37.732273   13501 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0821 10:34:37.734440   13501 out.go:177] * Verifying Kubernetes components...
	I0821 10:34:37.736780   13501 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0821 10:34:37.732352   13501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351207
	I0821 10:34:37.734176   13501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/addons-351207/id_rsa Username:docker}
	I0821 10:34:37.734218   13501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/addons-351207/id_rsa Username:docker}
	I0821 10:34:37.740347   13501 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0821 10:34:37.739615   13501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/addons-351207/id_rsa Username:docker}
	I0821 10:34:37.740293   13501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0821 10:34:37.742835   13501 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0821 10:34:37.745342   13501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/addons-351207/id_rsa Username:docker}
	I0821 10:34:37.747747   13501 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0821 10:34:37.755404   13501 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0821 10:34:37.749051   13501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/addons-351207/id_rsa Username:docker}
	I0821 10:34:37.757018   13501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/addons-351207/id_rsa Username:docker}
	I0821 10:34:37.758264   13501 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0821 10:34:37.758280   13501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0821 10:34:37.759038   13501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351207
	I0821 10:34:37.760844   13501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/addons-351207/id_rsa Username:docker}
	I0821 10:34:37.771092   13501 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0821 10:34:37.771114   13501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0821 10:34:37.771160   13501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351207
	I0821 10:34:37.772964   13501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/addons-351207/id_rsa Username:docker}
	I0821 10:34:37.790799   13501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/addons-351207/id_rsa Username:docker}
	I0821 10:34:37.792913   13501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/addons-351207/id_rsa Username:docker}
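Each "scp memory --> <path> (<n> bytes)" entry above is ssh_runner.go streaming an addon manifest embedded in the minikube binary straight to the node over one of the SSH sessions just opened; nothing is written to the host filesystem first. A rough hand-rolled equivalent for a single manifest (the local file name is hypothetical; the user, key path, and port 32772 come from the sshutil.go lines above):

    cat storage-provisioner.yaml | ssh -p 32772 \
      -i /home/jenkins/minikube-integration/17102-5717/.minikube/machines/addons-351207/id_rsa \
      docker@127.0.0.1 'sudo tee /etc/kubernetes/addons/storage-provisioner.yaml >/dev/null'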
	I0821 10:34:37.854997   13501 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
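The /bin/bash pipeline just launched edits the coredns ConfigMap in place: the first sed expression inserts a hosts block ahead of the "forward . /etc/resolv.conf" line, the second adds a log directive before errors, and the modified YAML is fed back through "kubectl replace -f -". Once it completes (2.097s later, logged at 10:34:39 below), the Corefile carries a block equivalent to:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }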
	I0821 10:34:37.856056   13501 node_ready.go:35] waiting up to 6m0s for node "addons-351207" to be "Ready" ...
	I0821 10:34:38.041739   13501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0821 10:34:38.049051   13501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0821 10:34:38.151292   13501 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0821 10:34:38.151401   13501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0821 10:34:38.156028   13501 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0821 10:34:38.156106   13501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0821 10:34:38.245704   13501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0821 10:34:38.256883   13501 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I0821 10:34:38.256955   13501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0821 10:34:38.335993   13501 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0821 10:34:38.336073   13501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0821 10:34:38.336624   13501 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I0821 10:34:38.336678   13501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0821 10:34:38.336891   13501 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0821 10:34:38.336942   13501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0821 10:34:38.349215   13501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0821 10:34:38.437549   13501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0821 10:34:38.452356   13501 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0821 10:34:38.452427   13501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0821 10:34:38.460548   13501 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0821 10:34:38.460571   13501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0821 10:34:38.547711   13501 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0821 10:34:38.547734   13501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0821 10:34:38.547843   13501 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0821 10:34:38.547867   13501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0821 10:34:38.553672   13501 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0821 10:34:38.553693   13501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0821 10:34:38.553672   13501 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0821 10:34:38.553708   13501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0821 10:34:38.758781   13501 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0821 10:34:38.758816   13501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0821 10:34:38.843849   13501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0821 10:34:38.847486   13501 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0821 10:34:38.847509   13501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0821 10:34:38.847635   13501 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0821 10:34:38.847644   13501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0821 10:34:38.937014   13501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0821 10:34:39.040002   13501 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0821 10:34:39.040072   13501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0821 10:34:39.140472   13501 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0821 10:34:39.140543   13501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0821 10:34:39.150521   13501 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0821 10:34:39.150596   13501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0821 10:34:39.236378   13501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0821 10:34:39.455448   13501 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I0821 10:34:39.455523   13501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0821 10:34:39.456546   13501 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0821 10:34:39.456566   13501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0821 10:34:39.636136   13501 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0821 10:34:39.636167   13501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0821 10:34:39.748868   13501 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0821 10:34:39.748891   13501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0821 10:34:39.838125   13501 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0821 10:34:39.838148   13501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0821 10:34:39.952369   13501 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.097332053s)
	I0821 10:34:39.952399   13501 start.go:901] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0821 10:34:40.139328   13501 node_ready.go:58] node "addons-351207" has status "Ready":"False"
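node_ready.go:58 lines like the one above recur roughly every 2.5s until the kubelet reports Ready=True on the Node object; the wait was armed at 10:34:37 with a 6m budget. The same condition can be inspected by hand with a jsonpath query (a sketch, not what the harness runs):

    kubectl --context addons-351207 get node addons-351207 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # prints False until the node is Ready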
	I0821 10:34:40.249254   13501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0821 10:34:40.255665   13501 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0821 10:34:40.255729   13501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0821 10:34:40.257575   13501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0821 10:34:40.842064   13501 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0821 10:34:40.842141   13501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0821 10:34:41.150466   13501 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0821 10:34:41.150549   13501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0821 10:34:41.549278   13501 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0821 10:34:41.549355   13501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0821 10:34:41.750304   13501 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0821 10:34:41.750381   13501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0821 10:34:41.848915   13501 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0821 10:34:41.848992   13501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0821 10:34:42.044812   13501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0821 10:34:42.354093   13501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.312259847s)
	I0821 10:34:42.454169   13501 node_ready.go:58] node "addons-351207" has status "Ready":"False"
	I0821 10:34:43.736964   13501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.687813642s)
	I0821 10:34:43.737006   13501 addons.go:467] Verifying addon ingress=true in "addons-351207"
	I0821 10:34:43.737027   13501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.491130748s)
	I0821 10:34:43.738733   13501 out.go:177] * Verifying ingress addon...
	I0821 10:34:43.737115   13501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.387799584s)
	I0821 10:34:43.737173   13501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.299550136s)
	I0821 10:34:43.737245   13501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.893360328s)
	I0821 10:34:43.737303   13501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (4.800256177s)
	I0821 10:34:43.737373   13501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.500919003s)
	I0821 10:34:43.737452   13501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.488107176s)
	I0821 10:34:43.737578   13501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.479932303s)
	W0821 10:34:43.740454   13501 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0821 10:34:43.740498   13501 retry.go:31] will retry after 373.998291ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
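The failure above is an ordering race, not a bad manifest: the same apply batch both creates the volumesnapshotclasses.snapshot.storage.k8s.io CRD (see the stdout) and instantiates a VolumeSnapshotClass from csi-hostpath-snapshotclass.yaml, and the API server has not finished establishing the new type when the instance arrives, hence "ensure CRDs are installed first". minikube's retry loop absorbs it, re-applying with --force at 10:34:44 below. Outside such a loop, the usual fix is to wait for the CRD to be established before applying instances of it; a minimal sketch:

    kubectl wait --for condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml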
	I0821 10:34:43.740541   13501 addons.go:467] Verifying addon registry=true in "addons-351207"
	I0821 10:34:43.742418   13501 out.go:177] * Verifying registry addon...
	I0821 10:34:43.740946   13501 addons.go:467] Verifying addon metrics-server=true in "addons-351207"
	I0821 10:34:43.741270   13501 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0821 10:34:43.744763   13501 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0821 10:34:43.747718   13501 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0821 10:34:43.747734   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:34:43.748128   13501 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0821 10:34:43.748144   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:34:43.750683   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:34:43.751001   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
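From this point the log settles into kapi.go:96 polling: one line roughly every 500ms per label selector until the matching pods leave Pending, which accounts for nearly all of the output below. Each poll asks the same question as a kubectl wait against that selector (an equivalent check, sketched here; kapi.go does not shell out to kubectl):

    kubectl --context addons-351207 -n ingress-nginx wait pod \
      -l app.kubernetes.io/name=ingress-nginx --for=condition=Ready --timeout=6m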
	I0821 10:34:44.114715   13501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0821 10:34:44.254822   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:34:44.255228   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:34:44.492411   13501 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0821 10:34:44.492518   13501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351207
	I0821 10:34:44.511439   13501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/addons-351207/id_rsa Username:docker}
	I0821 10:34:44.656402   13501 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0821 10:34:44.755117   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:34:44.756412   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:34:44.757288   13501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.71236706s)
	I0821 10:34:44.757327   13501 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-351207"
	I0821 10:34:44.759277   13501 out.go:177] * Verifying csi-hostpath-driver addon...
	I0821 10:34:44.762208   13501 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0821 10:34:44.837046   13501 addons.go:231] Setting addon gcp-auth=true in "addons-351207"
	I0821 10:34:44.837104   13501 host.go:66] Checking if "addons-351207" exists ...
	I0821 10:34:44.837593   13501 cli_runner.go:164] Run: docker container inspect addons-351207 --format={{.State.Status}}
	I0821 10:34:44.837617   13501 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0821 10:34:44.837627   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:34:44.841396   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:34:44.857759   13501 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0821 10:34:44.857819   13501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351207
	I0821 10:34:44.874291   13501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/addons-351207/id_rsa Username:docker}
	I0821 10:34:44.949114   13501 node_ready.go:58] node "addons-351207" has status "Ready":"False"
	I0821 10:34:45.233489   13501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.118726974s)
	I0821 10:34:45.236312   13501 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0821 10:34:45.237748   13501 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0821 10:34:45.239078   13501 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0821 10:34:45.239092   13501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0821 10:34:45.254274   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:34:45.254548   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:34:45.254934   13501 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0821 10:34:45.254947   13501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0821 10:34:45.269712   13501 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0821 10:34:45.269727   13501 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0821 10:34:45.284812   13501 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
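The gcp-auth addon wires the host's Google credentials into the cluster: google_application_credentials.json (162 bytes) and google_cloud_project (12 bytes) were copied to /var/lib/minikube at 10:34:44 above, and the namespace, Service, and webhook manifests were just applied in one batch. That the copy landed can be spot-checked from the host (a sketch using the minikube CLI):

    minikube -p addons-351207 ssh -- sudo cat /var/lib/minikube/google_cloud_project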
	I0821 10:34:45.345090   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:34:45.755229   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:34:45.755829   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:34:45.848831   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:34:46.256930   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:34:46.257375   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:34:46.346362   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:34:46.756102   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:34:46.839800   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:34:46.848734   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:34:46.949679   13501 node_ready.go:58] node "addons-351207" has status "Ready":"False"
	I0821 10:34:47.338598   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:34:47.339126   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:34:47.352146   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:34:47.546017   13501 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.261162461s)
	I0821 10:34:47.546855   13501 addons.go:467] Verifying addon gcp-auth=true in "addons-351207"
	I0821 10:34:47.548629   13501 out.go:177] * Verifying gcp-auth addon...
	I0821 10:34:47.550810   13501 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0821 10:34:47.554075   13501 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0821 10:34:47.554091   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:34:47.556619   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:34:47.755013   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:34:47.756939   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:34:47.846485   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:34:48.061871   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:34:48.255711   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:34:48.256136   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:34:48.347062   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:34:48.560936   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:34:48.755527   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:34:48.755945   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:34:48.847145   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:34:49.061603   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:34:49.263012   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:34:49.263603   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:34:49.346872   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:34:49.449363   13501 node_ready.go:58] node "addons-351207" has status "Ready":"False"
	I0821 10:34:49.561455   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:34:49.754808   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:34:49.755926   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:34:49.846008   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:34:50.061139   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:34:50.254950   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:34:50.255129   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:34:50.346592   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:34:50.560637   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:34:50.755719   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:34:50.755996   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:34:50.845585   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:34:51.060395   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:34:51.255584   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:34:51.256029   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:34:51.345429   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:34:51.560011   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:34:51.754946   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:34:51.755283   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:34:51.846260   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:34:51.949899   13501 node_ready.go:58] node "addons-351207" has status "Ready":"False"
	I0821 10:34:52.060841   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:34:52.255526   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:34:52.255791   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:34:52.346029   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:34:52.560190   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:34:52.754568   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:34:52.754811   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:34:52.846589   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:34:53.059740   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:34:53.255088   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:34:53.255321   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:34:53.346279   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:34:53.560462   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:34:53.755182   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:34:53.755213   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:34:53.846106   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:34:54.060868   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:34:54.254873   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:34:54.255143   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:34:54.346696   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:34:54.451368   13501 node_ready.go:58] node "addons-351207" has status "Ready":"False"
	I0821 10:34:54.560222   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:34:54.754684   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:34:54.754685   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:34:54.845621   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:34:55.060540   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:34:55.255218   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:34:55.255455   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:34:55.345444   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:34:55.560733   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:34:55.755120   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:34:55.755165   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:34:55.845760   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:34:56.060154   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:34:56.254622   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:34:56.254781   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:34:56.345477   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:34:56.559789   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:34:56.754429   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:34:56.754688   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:34:56.845136   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:34:56.949653   13501 node_ready.go:58] node "addons-351207" has status "Ready":"False"
	I0821 10:34:57.060155   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:34:57.254353   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:34:57.254603   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:34:57.345952   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:34:57.559949   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:34:57.754495   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:34:57.754516   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:34:57.845874   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:34:58.059922   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:34:58.254076   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:34:58.254352   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:34:58.345403   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:34:58.560593   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:34:58.754223   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:34:58.754424   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:34:58.845522   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:34:59.060224   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:34:59.254680   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:34:59.254990   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:34:59.344878   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:34:59.449115   13501 node_ready.go:58] node "addons-351207" has status "Ready":"False"
	I0821 10:34:59.559968   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:34:59.754076   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:34:59.754293   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:34:59.845497   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:00.059328   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:00.254853   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:00.254945   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:00.345144   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:00.560599   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:00.754594   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:00.754779   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:00.844903   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:01.059783   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:01.254163   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:01.255107   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:01.345683   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:01.559836   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:01.754668   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:01.755006   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:01.845108   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:01.949579   13501 node_ready.go:58] node "addons-351207" has status "Ready":"False"
	I0821 10:35:02.060034   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:02.254420   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:02.254639   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:02.345883   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:02.559483   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:02.756149   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:02.757110   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:02.845578   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:03.059674   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:03.255110   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:03.255392   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:03.345587   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:03.560123   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:03.754894   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:03.755143   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:03.845597   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:04.059491   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:04.254930   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:04.255088   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:04.345473   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:04.448857   13501 node_ready.go:58] node "addons-351207" has status "Ready":"False"
	I0821 10:35:04.560845   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:04.754576   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:04.754988   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:04.845635   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:05.060001   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:05.254614   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:05.254669   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:05.345170   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:05.560242   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:05.754920   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:05.754968   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:05.845666   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:06.059487   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:06.254801   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:06.254997   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:06.345488   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:06.559569   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:06.754839   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:06.754839   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:06.845371   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:06.948701   13501 node_ready.go:58] node "addons-351207" has status "Ready":"False"
	I0821 10:35:07.060258   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:07.255801   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:07.256693   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:07.344972   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:07.559996   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:07.754589   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:07.754715   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:07.846028   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:08.060635   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:08.254549   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:08.254823   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:08.346420   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:08.559729   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:08.754198   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:08.754221   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:08.845551   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:08.949023   13501 node_ready.go:58] node "addons-351207" has status "Ready":"False"
	I0821 10:35:09.059515   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:09.254708   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:09.255142   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:09.345060   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:09.560241   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:09.754536   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:09.754900   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:09.846551   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:10.061405   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:10.254687   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:10.254888   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:10.345053   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:10.559948   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:10.754155   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:10.754443   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:10.845703   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:10.949094   13501 node_ready.go:58] node "addons-351207" has status "Ready":"False"
	I0821 10:35:11.059744   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:11.254902   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:11.255018   13501 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0821 10:35:11.255039   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:11.346081   13501 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0821 10:35:11.346099   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:11.448884   13501 node_ready.go:49] node "addons-351207" has status "Ready":"True"
	I0821 10:35:11.448907   13501 node_ready.go:38] duration metric: took 33.592816734s waiting for node "addons-351207" to be "Ready" ...
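
The node_ready.go lines above are a plain poll of the node's Ready condition. Below is a minimal sketch of that kind of loop with client-go; the kubeconfig path, the hard-coded node name, and the sleep interval are illustrative assumptions, not minikube's actual wiring.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig location; adjust for your environment.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), "addons-351207", metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					// The log's `has status "Ready":"False"` lines report this condition.
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						fmt.Println("node addons-351207 is Ready")
						return
					}
				}
			}
			time.Sleep(2 * time.Second) // arbitrary short interval between attempts
		}
		fmt.Println("timed out waiting for node to become Ready")
	}
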
	I0821 10:35:11.448917   13501 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods, including pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
	I0821 10:35:11.457301   13501 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-499vt" in "kube-system" namespace to be "Ready" ...
	I0821 10:35:11.560391   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:11.755531   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:11.755611   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:11.847174   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:11.971701   13501 pod_ready.go:92] pod "coredns-5d78c9869d-499vt" in "kube-system" namespace has status "Ready":"True"
	I0821 10:35:11.971724   13501 pod_ready.go:81] duration metric: took 514.400535ms waiting for pod "coredns-5d78c9869d-499vt" in "kube-system" namespace to be "Ready" ...
	I0821 10:35:11.971746   13501 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-351207" in "kube-system" namespace to be "Ready" ...
	I0821 10:35:11.976668   13501 pod_ready.go:92] pod "etcd-addons-351207" in "kube-system" namespace has status "Ready":"True"
	I0821 10:35:11.976697   13501 pod_ready.go:81] duration metric: took 4.944558ms waiting for pod "etcd-addons-351207" in "kube-system" namespace to be "Ready" ...
	I0821 10:35:11.976715   13501 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-351207" in "kube-system" namespace to be "Ready" ...
	I0821 10:35:11.981316   13501 pod_ready.go:92] pod "kube-apiserver-addons-351207" in "kube-system" namespace has status "Ready":"True"
	I0821 10:35:11.981336   13501 pod_ready.go:81] duration metric: took 4.613664ms waiting for pod "kube-apiserver-addons-351207" in "kube-system" namespace to be "Ready" ...
	I0821 10:35:11.981349   13501 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-351207" in "kube-system" namespace to be "Ready" ...
	I0821 10:35:12.059697   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:12.249036   13501 pod_ready.go:92] pod "kube-controller-manager-addons-351207" in "kube-system" namespace has status "Ready":"True"
	I0821 10:35:12.249057   13501 pod_ready.go:81] duration metric: took 267.70141ms waiting for pod "kube-controller-manager-addons-351207" in "kube-system" namespace to be "Ready" ...
	I0821 10:35:12.249069   13501 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w4j8s" in "kube-system" namespace to be "Ready" ...
	I0821 10:35:12.255089   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:12.255104   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:12.346186   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:12.559561   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:12.650080   13501 pod_ready.go:92] pod "kube-proxy-w4j8s" in "kube-system" namespace has status "Ready":"True"
	I0821 10:35:12.650099   13501 pod_ready.go:81] duration metric: took 401.02537ms waiting for pod "kube-proxy-w4j8s" in "kube-system" namespace to be "Ready" ...
	I0821 10:35:12.650109   13501 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-351207" in "kube-system" namespace to be "Ready" ...
	I0821 10:35:12.754962   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:12.755086   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:12.846106   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:13.049447   13501 pod_ready.go:92] pod "kube-scheduler-addons-351207" in "kube-system" namespace has status "Ready":"True"
	I0821 10:35:13.049472   13501 pod_ready.go:81] duration metric: took 399.357269ms waiting for pod "kube-scheduler-addons-351207" in "kube-system" namespace to be "Ready" ...
	I0821 10:35:13.049481   13501 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7746886d4f-xl26c" in "kube-system" namespace to be "Ready" ...
	I0821 10:35:13.060573   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:13.255909   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:13.256180   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:13.346898   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:13.561265   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:13.757285   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:13.757529   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:13.846980   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:14.060284   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:14.255709   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:14.256464   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:14.347338   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:14.560381   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:14.755409   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:14.755604   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:14.846840   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:15.060407   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:15.256273   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:15.256348   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:15.346902   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:15.355863   13501 pod_ready.go:102] pod "metrics-server-7746886d4f-xl26c" in "kube-system" namespace has status "Ready":"False"
	I0821 10:35:15.561250   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:15.755334   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:15.755435   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:15.847761   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:16.060138   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:16.255190   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:16.255307   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:16.346892   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:16.560511   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:16.757209   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:16.758658   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:16.848361   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:17.061147   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:17.254864   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:17.256156   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:17.347081   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:17.560467   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:17.755817   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:17.756626   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:17.847237   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:17.854427   13501 pod_ready.go:102] pod "metrics-server-7746886d4f-xl26c" in "kube-system" namespace has status "Ready":"False"
	I0821 10:35:18.062586   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:18.255406   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:18.255661   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:18.346955   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:18.560259   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:18.755721   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:18.756113   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:18.848545   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:19.060285   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:19.254869   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:19.255117   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:19.346842   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:19.559870   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:19.756174   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:19.756227   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:19.847142   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:19.855524   13501 pod_ready.go:102] pod "metrics-server-7746886d4f-xl26c" in "kube-system" namespace has status "Ready":"False"
	I0821 10:35:20.060730   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:20.257160   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:20.257854   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:20.347870   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:20.560299   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:20.754974   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:20.755241   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:20.846247   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:21.060921   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:21.255744   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:21.255774   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:21.347130   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:21.559850   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:21.756652   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:21.757533   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:21.846026   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:22.060052   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:22.254767   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:22.255631   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:22.346929   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:22.354419   13501 pod_ready.go:102] pod "metrics-server-7746886d4f-xl26c" in "kube-system" namespace has status "Ready":"False"
	I0821 10:35:22.560625   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:22.755883   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:22.755918   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:22.847660   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:23.060003   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:23.254812   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:23.255036   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:23.346883   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:23.560314   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:23.755460   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:23.755695   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:23.846816   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:24.060706   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:24.255848   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:24.255898   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:24.348080   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:24.355243   13501 pod_ready.go:102] pod "metrics-server-7746886d4f-xl26c" in "kube-system" namespace has status "Ready":"False"
	I0821 10:35:24.560831   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:24.757439   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:24.758260   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:24.847270   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:25.059800   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:25.255616   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:25.255814   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:25.346850   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:25.560873   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:25.756054   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:25.756207   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:25.847143   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:26.060894   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:26.263449   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:26.263673   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:26.394833   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:26.413824   13501 pod_ready.go:102] pod "metrics-server-7746886d4f-xl26c" in "kube-system" namespace has status "Ready":"False"
	I0821 10:35:26.616462   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:26.755037   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:26.755141   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:26.846531   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:27.060490   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:27.256032   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:27.256816   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:27.347786   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:27.560462   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:27.755441   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 10:35:27.755799   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:27.846787   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:28.060894   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:28.255642   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:28.255754   13501 kapi.go:107] duration metric: took 44.510990374s to wait for kubernetes.io/minikube-addons=registry ...
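
Every kapi.go:96 line above is one iteration of a label-selector wait: list the pods matching the selector and keep polling until none is still Pending. The following is a compilable sketch of that pattern, assuming an already configured client-go clientset; WaitForSelector is a made-up name, and the real kapi.go additionally renders per-pod state strings like the "Pending: [<nil>]" seen here.

	package waiters

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// WaitForSelector polls until every pod matching selector in ns is Running,
	// mirroring the repeated kapi.go:96 lines in the log above.
	func WaitForSelector(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 {
				allRunning := true
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						allRunning = false // still Pending, as logged above
						break
					}
				}
				if allRunning {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for pods matching %q", selector)
	}

The registry wait that just completed is roughly WaitForSelector(cs, "kube-system", "kubernetes.io/minikube-addons=registry", ...): two pods were found at 10:35:11, and the selector kept reporting Pending until both reached Running at 10:35:28.
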
	I0821 10:35:28.347846   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:28.559445   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:28.754975   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:28.846702   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:28.854755   13501 pod_ready.go:102] pod "metrics-server-7746886d4f-xl26c" in "kube-system" namespace has status "Ready":"False"
	I0821 10:35:29.059472   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:29.255224   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:29.346496   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:29.560283   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:29.755568   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:29.847814   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:30.060075   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:30.255274   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:30.346336   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:30.560576   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:30.754904   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:30.847747   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:30.857069   13501 pod_ready.go:102] pod "metrics-server-7746886d4f-xl26c" in "kube-system" namespace has status "Ready":"False"
	I0821 10:35:31.062435   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:31.254902   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:31.347977   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:31.560328   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:31.755381   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:31.848014   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:32.060542   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:32.255576   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:32.346543   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:32.560684   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:32.755305   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:32.847284   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:33.060520   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:33.255028   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:33.347492   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:33.354176   13501 pod_ready.go:102] pod "metrics-server-7746886d4f-xl26c" in "kube-system" namespace has status "Ready":"False"
	I0821 10:35:33.560170   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:33.756827   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:33.847298   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:34.059910   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:34.255609   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:34.346486   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:34.559578   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:34.755248   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:34.846645   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:35.059848   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:35.255655   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:35.347588   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:35.355534   13501 pod_ready.go:102] pod "metrics-server-7746886d4f-xl26c" in "kube-system" namespace has status "Ready":"False"
	I0821 10:35:35.560751   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:35.756841   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:35.848841   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:36.143573   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:36.255635   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:36.347493   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:36.560488   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:36.756378   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:36.847557   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:37.062487   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:37.260264   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:37.347851   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:37.355896   13501 pod_ready.go:102] pod "metrics-server-7746886d4f-xl26c" in "kube-system" namespace has status "Ready":"False"
	I0821 10:35:37.561488   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:37.755645   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:37.847179   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:38.061456   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:38.255640   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:38.346849   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:38.560334   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:38.755691   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:38.846920   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:39.060939   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:39.254668   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:39.346928   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:39.560537   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:39.757976   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:39.846454   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:39.854527   13501 pod_ready.go:102] pod "metrics-server-7746886d4f-xl26c" in "kube-system" namespace has status "Ready":"False"
	I0821 10:35:40.060351   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:40.255537   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:40.346414   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:40.560080   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:40.754771   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:40.846966   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:41.060305   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:41.255464   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:41.346963   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:41.560387   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:41.757414   13501 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 10:35:41.846156   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:41.854755   13501 pod_ready.go:92] pod "metrics-server-7746886d4f-xl26c" in "kube-system" namespace has status "Ready":"True"
	I0821 10:35:41.854781   13501 pod_ready.go:81] duration metric: took 28.805292997s waiting for pod "metrics-server-7746886d4f-xl26c" in "kube-system" namespace to be "Ready" ...
	I0821 10:35:41.854806   13501 pod_ready.go:38] duration metric: took 30.405875328s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
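
The pod_ready.go waits differ from the kapi.go ones in that they key off the pod's Ready condition rather than its phase, which is what the `has status "Ready":"True"` lines are printing. A small helper in the same spirit (podIsReady is a hypothetical name):

	package waiters

	import corev1 "k8s.io/api/core/v1"

	// podIsReady reports whether the pod's PodReady condition is True; the
	// metrics-server lines above logged False until 10:35:41, then True.
	func podIsReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
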
	I0821 10:35:41.854824   13501 api_server.go:52] waiting for apiserver process to appear ...
	I0821 10:35:41.854854   13501 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0821 10:35:41.854916   13501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0821 10:35:41.888860   13501 cri.go:89] found id: "29e2b1aebe3357e968461e20604ef50db93ec8e9d61ee16a64a1241b15eff62b"
	I0821 10:35:41.888883   13501 cri.go:89] found id: ""
	I0821 10:35:41.888892   13501 logs.go:284] 1 containers: [29e2b1aebe3357e968461e20604ef50db93ec8e9d61ee16a64a1241b15eff62b]
	I0821 10:35:41.888943   13501 ssh_runner.go:195] Run: which crictl
	I0821 10:35:41.892663   13501 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0821 10:35:41.892718   13501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0821 10:35:41.957184   13501 cri.go:89] found id: "edc8ee70085051c43f733610fec1879fc253e9164445206acac2b1eb4d920335"
	I0821 10:35:41.957205   13501 cri.go:89] found id: ""
	I0821 10:35:41.957214   13501 logs.go:284] 1 containers: [edc8ee70085051c43f733610fec1879fc253e9164445206acac2b1eb4d920335]
	I0821 10:35:41.957266   13501 ssh_runner.go:195] Run: which crictl
	I0821 10:35:41.960787   13501 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0821 10:35:41.960850   13501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0821 10:35:41.994144   13501 cri.go:89] found id: "f37f240cb72aa82529c30c747349e4a232d1b88f8e05e04d9aab08d5d42688a9"
	I0821 10:35:41.994167   13501 cri.go:89] found id: ""
	I0821 10:35:41.994178   13501 logs.go:284] 1 containers: [f37f240cb72aa82529c30c747349e4a232d1b88f8e05e04d9aab08d5d42688a9]
	I0821 10:35:41.994226   13501 ssh_runner.go:195] Run: which crictl
	I0821 10:35:41.997665   13501 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0821 10:35:41.997737   13501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0821 10:35:42.060543   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:42.069932   13501 cri.go:89] found id: "70f840e974edb482cc3179696075ebd90d0d87e5a1a00a45e0c2843adf85208e"
	I0821 10:35:42.069986   13501 cri.go:89] found id: ""
	I0821 10:35:42.070008   13501 logs.go:284] 1 containers: [70f840e974edb482cc3179696075ebd90d0d87e5a1a00a45e0c2843adf85208e]
	I0821 10:35:42.070062   13501 ssh_runner.go:195] Run: which crictl
	I0821 10:35:42.073420   13501 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0821 10:35:42.073485   13501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0821 10:35:42.165367   13501 cri.go:89] found id: "404f545a04bcd8b91d4fb8aa9720e93c65f2c9389492360af8aff93f478f142e"
	I0821 10:35:42.165394   13501 cri.go:89] found id: ""
	I0821 10:35:42.165404   13501 logs.go:284] 1 containers: [404f545a04bcd8b91d4fb8aa9720e93c65f2c9389492360af8aff93f478f142e]
	I0821 10:35:42.165454   13501 ssh_runner.go:195] Run: which crictl
	I0821 10:35:42.168716   13501 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0821 10:35:42.168778   13501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0821 10:35:42.256098   13501 kapi.go:107] duration metric: took 58.514824906s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0821 10:35:42.258956   13501 cri.go:89] found id: "1a31c61730b75649d11c090562b5c92f4f857612230fa9e0fae1e310661cb46b"
	I0821 10:35:42.258975   13501 cri.go:89] found id: ""
	I0821 10:35:42.258983   13501 logs.go:284] 1 containers: [1a31c61730b75649d11c090562b5c92f4f857612230fa9e0fae1e310661cb46b]
	I0821 10:35:42.259028   13501 ssh_runner.go:195] Run: which crictl
	I0821 10:35:42.262989   13501 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0821 10:35:42.263044   13501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0821 10:35:42.341627   13501 cri.go:89] found id: "a8b0b7e51bcbe1ee73bf9fee37e07b046fd6abd7323ede20b265b363e64ddcfa"
	I0821 10:35:42.341648   13501 cri.go:89] found id: ""
	I0821 10:35:42.341655   13501 logs.go:284] 1 containers: [a8b0b7e51bcbe1ee73bf9fee37e07b046fd6abd7323ede20b265b363e64ddcfa]
	I0821 10:35:42.341693   13501 ssh_runner.go:195] Run: which crictl
	I0821 10:35:42.345333   13501 logs.go:123] Gathering logs for kube-apiserver [29e2b1aebe3357e968461e20604ef50db93ec8e9d61ee16a64a1241b15eff62b] ...
	I0821 10:35:42.345362   13501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29e2b1aebe3357e968461e20604ef50db93ec8e9d61ee16a64a1241b15eff62b"
	I0821 10:35:42.346634   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:42.399684   13501 logs.go:123] Gathering logs for coredns [f37f240cb72aa82529c30c747349e4a232d1b88f8e05e04d9aab08d5d42688a9] ...
	I0821 10:35:42.399718   13501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f37f240cb72aa82529c30c747349e4a232d1b88f8e05e04d9aab08d5d42688a9"
	I0821 10:35:42.475030   13501 logs.go:123] Gathering logs for kindnet [a8b0b7e51bcbe1ee73bf9fee37e07b046fd6abd7323ede20b265b363e64ddcfa] ...
	I0821 10:35:42.475063   13501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8b0b7e51bcbe1ee73bf9fee37e07b046fd6abd7323ede20b265b363e64ddcfa"
	I0821 10:35:42.540964   13501 logs.go:123] Gathering logs for dmesg ...
	I0821 10:35:42.540992   13501 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0821 10:35:42.553092   13501 logs.go:123] Gathering logs for describe nodes ...
	I0821 10:35:42.553119   13501 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0821 10:35:42.560319   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:42.767427   13501 logs.go:123] Gathering logs for etcd [edc8ee70085051c43f733610fec1879fc253e9164445206acac2b1eb4d920335] ...
	I0821 10:35:42.767456   13501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 edc8ee70085051c43f733610fec1879fc253e9164445206acac2b1eb4d920335"
	I0821 10:35:42.942859   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:42.967340   13501 logs.go:123] Gathering logs for kube-scheduler [70f840e974edb482cc3179696075ebd90d0d87e5a1a00a45e0c2843adf85208e] ...
	I0821 10:35:42.967446   13501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 70f840e974edb482cc3179696075ebd90d0d87e5a1a00a45e0c2843adf85208e"
	I0821 10:35:43.059669   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:43.078181   13501 logs.go:123] Gathering logs for kube-proxy [404f545a04bcd8b91d4fb8aa9720e93c65f2c9389492360af8aff93f478f142e] ...
	I0821 10:35:43.078212   13501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 404f545a04bcd8b91d4fb8aa9720e93c65f2c9389492360af8aff93f478f142e"
	I0821 10:35:43.144857   13501 logs.go:123] Gathering logs for kube-controller-manager [1a31c61730b75649d11c090562b5c92f4f857612230fa9e0fae1e310661cb46b] ...
	I0821 10:35:43.144879   13501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a31c61730b75649d11c090562b5c92f4f857612230fa9e0fae1e310661cb46b"
	I0821 10:35:43.203831   13501 logs.go:123] Gathering logs for CRI-O ...
	I0821 10:35:43.203862   13501 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0821 10:35:43.321549   13501 logs.go:123] Gathering logs for container status ...
	I0821 10:35:43.321580   13501 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0821 10:35:43.348335   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:43.362530   13501 logs.go:123] Gathering logs for kubelet ...
	I0821 10:35:43.362554   13501 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
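
The "Gathering logs for ..." steps above are shell-outs executed on the node over SSH: crictl for per-container log tails, journalctl for the crio and kubelet units. Run locally, the per-container step reduces to something like the sketch below; the container ID is copied from the cri.go lines above, and sudo plus a crictl binary on PATH are assumptions.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// kube-apiserver container ID taken from the cri.go "found id" lines above.
		id := "29e2b1aebe3357e968461e20604ef50db93ec8e9d61ee16a64a1241b15eff62b"
		// Equivalent of the logged command: sudo crictl logs --tail 400 <id>
		out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			fmt.Println("crictl logs failed:", err)
		}
		fmt.Print(string(out))
	}
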
	I0821 10:35:43.560932   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:43.847470   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:44.060121   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 10:35:44.346618   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:44.560237   13501 kapi.go:107] duration metric: took 57.009424721s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0821 10:35:44.562245   13501 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-351207 cluster.
	I0821 10:35:44.563887   13501 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0821 10:35:44.565201   13501 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0821 10:35:44.848072   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:45.349091   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:45.847322   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:45.937686   13501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0821 10:35:45.952906   13501 api_server.go:72] duration metric: took 1m8.220610688s to wait for apiserver process to appear ...
	I0821 10:35:45.952934   13501 api_server.go:88] waiting for apiserver healthz status ...
	I0821 10:35:45.952968   13501 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0821 10:35:45.953022   13501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0821 10:35:46.045190   13501 cri.go:89] found id: "29e2b1aebe3357e968461e20604ef50db93ec8e9d61ee16a64a1241b15eff62b"
	I0821 10:35:46.045217   13501 cri.go:89] found id: ""
	I0821 10:35:46.045226   13501 logs.go:284] 1 containers: [29e2b1aebe3357e968461e20604ef50db93ec8e9d61ee16a64a1241b15eff62b]
	I0821 10:35:46.045273   13501 ssh_runner.go:195] Run: which crictl
	I0821 10:35:46.048776   13501 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0821 10:35:46.048834   13501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0821 10:35:46.138264   13501 cri.go:89] found id: "edc8ee70085051c43f733610fec1879fc253e9164445206acac2b1eb4d920335"
	I0821 10:35:46.138288   13501 cri.go:89] found id: ""
	I0821 10:35:46.138297   13501 logs.go:284] 1 containers: [edc8ee70085051c43f733610fec1879fc253e9164445206acac2b1eb4d920335]
	I0821 10:35:46.138349   13501 ssh_runner.go:195] Run: which crictl
	I0821 10:35:46.142240   13501 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0821 10:35:46.142302   13501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0821 10:35:46.178119   13501 cri.go:89] found id: "f37f240cb72aa82529c30c747349e4a232d1b88f8e05e04d9aab08d5d42688a9"
	I0821 10:35:46.178141   13501 cri.go:89] found id: ""
	I0821 10:35:46.178150   13501 logs.go:284] 1 containers: [f37f240cb72aa82529c30c747349e4a232d1b88f8e05e04d9aab08d5d42688a9]
	I0821 10:35:46.178197   13501 ssh_runner.go:195] Run: which crictl
	I0821 10:35:46.181541   13501 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0821 10:35:46.181599   13501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0821 10:35:46.257246   13501 cri.go:89] found id: "70f840e974edb482cc3179696075ebd90d0d87e5a1a00a45e0c2843adf85208e"
	I0821 10:35:46.257269   13501 cri.go:89] found id: ""
	I0821 10:35:46.257278   13501 logs.go:284] 1 containers: [70f840e974edb482cc3179696075ebd90d0d87e5a1a00a45e0c2843adf85208e]
	I0821 10:35:46.257322   13501 ssh_runner.go:195] Run: which crictl
	I0821 10:35:46.261512   13501 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0821 10:35:46.261580   13501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0821 10:35:46.338579   13501 cri.go:89] found id: "404f545a04bcd8b91d4fb8aa9720e93c65f2c9389492360af8aff93f478f142e"
	I0821 10:35:46.338603   13501 cri.go:89] found id: ""
	I0821 10:35:46.338611   13501 logs.go:284] 1 containers: [404f545a04bcd8b91d4fb8aa9720e93c65f2c9389492360af8aff93f478f142e]
	I0821 10:35:46.338665   13501 ssh_runner.go:195] Run: which crictl
	I0821 10:35:46.342014   13501 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0821 10:35:46.342088   13501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0821 10:35:46.346782   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:46.376074   13501 cri.go:89] found id: "1a31c61730b75649d11c090562b5c92f4f857612230fa9e0fae1e310661cb46b"
	I0821 10:35:46.376098   13501 cri.go:89] found id: ""
	I0821 10:35:46.376107   13501 logs.go:284] 1 containers: [1a31c61730b75649d11c090562b5c92f4f857612230fa9e0fae1e310661cb46b]
	I0821 10:35:46.376153   13501 ssh_runner.go:195] Run: which crictl
	I0821 10:35:46.379448   13501 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0821 10:35:46.379506   13501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0821 10:35:46.453572   13501 cri.go:89] found id: "a8b0b7e51bcbe1ee73bf9fee37e07b046fd6abd7323ede20b265b363e64ddcfa"
	I0821 10:35:46.453602   13501 cri.go:89] found id: ""
	I0821 10:35:46.453612   13501 logs.go:284] 1 containers: [a8b0b7e51bcbe1ee73bf9fee37e07b046fd6abd7323ede20b265b363e64ddcfa]
	I0821 10:35:46.453673   13501 ssh_runner.go:195] Run: which crictl
	I0821 10:35:46.457267   13501 logs.go:123] Gathering logs for kubelet ...
	I0821 10:35:46.457288   13501 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0821 10:35:46.532343   13501 logs.go:123] Gathering logs for etcd [edc8ee70085051c43f733610fec1879fc253e9164445206acac2b1eb4d920335] ...
	I0821 10:35:46.532380   13501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 edc8ee70085051c43f733610fec1879fc253e9164445206acac2b1eb4d920335"
	I0821 10:35:46.580396   13501 logs.go:123] Gathering logs for kube-proxy [404f545a04bcd8b91d4fb8aa9720e93c65f2c9389492360af8aff93f478f142e] ...
	I0821 10:35:46.580427   13501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 404f545a04bcd8b91d4fb8aa9720e93c65f2c9389492360af8aff93f478f142e"
	I0821 10:35:46.614411   13501 logs.go:123] Gathering logs for kube-controller-manager [1a31c61730b75649d11c090562b5c92f4f857612230fa9e0fae1e310661cb46b] ...
	I0821 10:35:46.614439   13501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a31c61730b75649d11c090562b5c92f4f857612230fa9e0fae1e310661cb46b"
	I0821 10:35:46.683630   13501 logs.go:123] Gathering logs for CRI-O ...
	I0821 10:35:46.683659   13501 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0821 10:35:46.761429   13501 logs.go:123] Gathering logs for container status ...
	I0821 10:35:46.761462   13501 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0821 10:35:46.799716   13501 logs.go:123] Gathering logs for dmesg ...
	I0821 10:35:46.799743   13501 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0821 10:35:46.810793   13501 logs.go:123] Gathering logs for describe nodes ...
	I0821 10:35:46.810818   13501 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0821 10:35:46.846924   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:46.948034   13501 logs.go:123] Gathering logs for kube-apiserver [29e2b1aebe3357e968461e20604ef50db93ec8e9d61ee16a64a1241b15eff62b] ...
	I0821 10:35:46.948062   13501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29e2b1aebe3357e968461e20604ef50db93ec8e9d61ee16a64a1241b15eff62b"
	I0821 10:35:47.004643   13501 logs.go:123] Gathering logs for coredns [f37f240cb72aa82529c30c747349e4a232d1b88f8e05e04d9aab08d5d42688a9] ...
	I0821 10:35:47.004671   13501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f37f240cb72aa82529c30c747349e4a232d1b88f8e05e04d9aab08d5d42688a9"
	I0821 10:35:47.068018   13501 logs.go:123] Gathering logs for kube-scheduler [70f840e974edb482cc3179696075ebd90d0d87e5a1a00a45e0c2843adf85208e] ...
	I0821 10:35:47.068056   13501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 70f840e974edb482cc3179696075ebd90d0d87e5a1a00a45e0c2843adf85208e"
	I0821 10:35:47.149393   13501 logs.go:123] Gathering logs for kindnet [a8b0b7e51bcbe1ee73bf9fee37e07b046fd6abd7323ede20b265b363e64ddcfa] ...
	I0821 10:35:47.149429   13501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8b0b7e51bcbe1ee73bf9fee37e07b046fd6abd7323ede20b265b363e64ddcfa"
	I0821 10:35:47.347371   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:47.846406   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:48.346205   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:48.846408   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:49.346519   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:49.682835   13501 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0821 10:35:49.687081   13501 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0821 10:35:49.688138   13501 api_server.go:141] control plane version: v1.27.4
	I0821 10:35:49.688164   13501 api_server.go:131] duration metric: took 3.735222245s to wait for apiserver health ...
	I0821 10:35:49.688174   13501 system_pods.go:43] waiting for kube-system pods to appear ...
	I0821 10:35:49.688197   13501 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0821 10:35:49.688252   13501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0821 10:35:49.721513   13501 cri.go:89] found id: "29e2b1aebe3357e968461e20604ef50db93ec8e9d61ee16a64a1241b15eff62b"
	I0821 10:35:49.721536   13501 cri.go:89] found id: ""
	I0821 10:35:49.721544   13501 logs.go:284] 1 containers: [29e2b1aebe3357e968461e20604ef50db93ec8e9d61ee16a64a1241b15eff62b]
	I0821 10:35:49.721585   13501 ssh_runner.go:195] Run: which crictl
	I0821 10:35:49.724826   13501 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0821 10:35:49.724882   13501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0821 10:35:49.756545   13501 cri.go:89] found id: "edc8ee70085051c43f733610fec1879fc253e9164445206acac2b1eb4d920335"
	I0821 10:35:49.756566   13501 cri.go:89] found id: ""
	I0821 10:35:49.756573   13501 logs.go:284] 1 containers: [edc8ee70085051c43f733610fec1879fc253e9164445206acac2b1eb4d920335]
	I0821 10:35:49.756617   13501 ssh_runner.go:195] Run: which crictl
	I0821 10:35:49.759700   13501 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0821 10:35:49.759750   13501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0821 10:35:49.790104   13501 cri.go:89] found id: "f37f240cb72aa82529c30c747349e4a232d1b88f8e05e04d9aab08d5d42688a9"
	I0821 10:35:49.790123   13501 cri.go:89] found id: ""
	I0821 10:35:49.790130   13501 logs.go:284] 1 containers: [f37f240cb72aa82529c30c747349e4a232d1b88f8e05e04d9aab08d5d42688a9]
	I0821 10:35:49.790186   13501 ssh_runner.go:195] Run: which crictl
	I0821 10:35:49.793173   13501 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0821 10:35:49.793229   13501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0821 10:35:49.823702   13501 cri.go:89] found id: "70f840e974edb482cc3179696075ebd90d0d87e5a1a00a45e0c2843adf85208e"
	I0821 10:35:49.823726   13501 cri.go:89] found id: ""
	I0821 10:35:49.823736   13501 logs.go:284] 1 containers: [70f840e974edb482cc3179696075ebd90d0d87e5a1a00a45e0c2843adf85208e]
	I0821 10:35:49.823782   13501 ssh_runner.go:195] Run: which crictl
	I0821 10:35:49.826799   13501 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0821 10:35:49.826845   13501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0821 10:35:49.846588   13501 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 10:35:49.858616   13501 cri.go:89] found id: "404f545a04bcd8b91d4fb8aa9720e93c65f2c9389492360af8aff93f478f142e"
	I0821 10:35:49.858635   13501 cri.go:89] found id: ""
	I0821 10:35:49.858643   13501 logs.go:284] 1 containers: [404f545a04bcd8b91d4fb8aa9720e93c65f2c9389492360af8aff93f478f142e]
	I0821 10:35:49.858690   13501 ssh_runner.go:195] Run: which crictl
	I0821 10:35:49.861759   13501 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0821 10:35:49.861809   13501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0821 10:35:49.892179   13501 cri.go:89] found id: "1a31c61730b75649d11c090562b5c92f4f857612230fa9e0fae1e310661cb46b"
	I0821 10:35:49.892202   13501 cri.go:89] found id: ""
	I0821 10:35:49.892213   13501 logs.go:284] 1 containers: [1a31c61730b75649d11c090562b5c92f4f857612230fa9e0fae1e310661cb46b]
	I0821 10:35:49.892262   13501 ssh_runner.go:195] Run: which crictl
	I0821 10:35:49.895320   13501 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0821 10:35:49.895381   13501 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0821 10:35:49.926399   13501 cri.go:89] found id: "a8b0b7e51bcbe1ee73bf9fee37e07b046fd6abd7323ede20b265b363e64ddcfa"
	I0821 10:35:49.926418   13501 cri.go:89] found id: ""
	I0821 10:35:49.926429   13501 logs.go:284] 1 containers: [a8b0b7e51bcbe1ee73bf9fee37e07b046fd6abd7323ede20b265b363e64ddcfa]
	I0821 10:35:49.926467   13501 ssh_runner.go:195] Run: which crictl
	I0821 10:35:49.929597   13501 logs.go:123] Gathering logs for coredns [f37f240cb72aa82529c30c747349e4a232d1b88f8e05e04d9aab08d5d42688a9] ...
	I0821 10:35:49.929619   13501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f37f240cb72aa82529c30c747349e4a232d1b88f8e05e04d9aab08d5d42688a9"
	I0821 10:35:49.962106   13501 logs.go:123] Gathering logs for kube-controller-manager [1a31c61730b75649d11c090562b5c92f4f857612230fa9e0fae1e310661cb46b] ...
	I0821 10:35:49.962130   13501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a31c61730b75649d11c090562b5c92f4f857612230fa9e0fae1e310661cb46b"
	I0821 10:35:50.011194   13501 logs.go:123] Gathering logs for kindnet [a8b0b7e51bcbe1ee73bf9fee37e07b046fd6abd7323ede20b265b363e64ddcfa] ...
	I0821 10:35:50.011228   13501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8b0b7e51bcbe1ee73bf9fee37e07b046fd6abd7323ede20b265b363e64ddcfa"
	I0821 10:35:50.043718   13501 logs.go:123] Gathering logs for dmesg ...
	I0821 10:35:50.043745   13501 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0821 10:35:50.054903   13501 logs.go:123] Gathering logs for etcd [edc8ee70085051c43f733610fec1879fc253e9164445206acac2b1eb4d920335] ...
	I0821 10:35:50.054927   13501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 edc8ee70085051c43f733610fec1879fc253e9164445206acac2b1eb4d920335"
	I0821 10:35:50.094886   13501 logs.go:123] Gathering logs for kube-apiserver [29e2b1aebe3357e968461e20604ef50db93ec8e9d61ee16a64a1241b15eff62b] ...
	I0821 10:35:50.094916   13501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 29e2b1aebe3357e968461e20604ef50db93ec8e9d61ee16a64a1241b15eff62b"
	I0821 10:35:50.135359   13501 logs.go:123] Gathering logs for kube-scheduler [70f840e974edb482cc3179696075ebd90d0d87e5a1a00a45e0c2843adf85208e] ...
	I0821 10:35:50.135388   13501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 70f840e974edb482cc3179696075ebd90d0d87e5a1a00a45e0c2843adf85208e"
	I0821 10:35:50.177204   13501 logs.go:123] Gathering logs for kube-proxy [404f545a04bcd8b91d4fb8aa9720e93c65f2c9389492360af8aff93f478f142e] ...
	I0821 10:35:50.177242   13501 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 404f545a04bcd8b91d4fb8aa9720e93c65f2c9389492360af8aff93f478f142e"
	I0821 10:35:50.210810   13501 logs.go:123] Gathering logs for CRI-O ...
	I0821 10:35:50.210834   13501 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0821 10:35:50.286128   13501 logs.go:123] Gathering logs for container status ...
	I0821 10:35:50.286158   13501 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0821 10:35:50.326404   13501 logs.go:123] Gathering logs for kubelet ...
	I0821 10:35:50.326432   13501 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0821 10:35:50.346427   13501 kapi.go:107] duration metric: took 1m5.584216069s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0821 10:35:50.348538   13501 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, default-storageclass, ingress-dns, helm-tiller, inspektor-gadget, metrics-server, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0821 10:35:50.349907   13501 addons.go:502] enable addons completed in 1m12.710156683s: enabled=[storage-provisioner cloud-spanner default-storageclass ingress-dns helm-tiller inspektor-gadget metrics-server volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0821 10:35:50.401760   13501 logs.go:123] Gathering logs for describe nodes ...
	I0821 10:35:50.401791   13501 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0821 10:35:53.006039   13501 system_pods.go:59] 18 kube-system pods found
	I0821 10:35:53.006075   13501 system_pods.go:61] "coredns-5d78c9869d-499vt" [63a954fb-cd41-4a25-9150-0db15cc96e39] Running
	I0821 10:35:53.006081   13501 system_pods.go:61] "csi-hostpath-attacher-0" [4c0c0a5b-54a3-49f0-a323-54113e1b4fcd] Running
	I0821 10:35:53.006087   13501 system_pods.go:61] "csi-hostpath-resizer-0" [286291f9-6d9b-45d0-8f5f-10f3b0310229] Running
	I0821 10:35:53.006092   13501 system_pods.go:61] "csi-hostpathplugin-wkdp6" [9578d95d-733f-43a7-90c2-27c8165209a8] Running
	I0821 10:35:53.006096   13501 system_pods.go:61] "etcd-addons-351207" [36e4bf41-7d21-4bd7-ba76-7f927f3ca07d] Running
	I0821 10:35:53.006100   13501 system_pods.go:61] "kindnet-6sk4p" [46428db7-38cd-4cb2-8c85-3bdb166ed4d9] Running
	I0821 10:35:53.006104   13501 system_pods.go:61] "kube-apiserver-addons-351207" [ca9b6415-182e-4c60-be06-9cdcf4dd8d12] Running
	I0821 10:35:53.006108   13501 system_pods.go:61] "kube-controller-manager-addons-351207" [82fd4ecc-4b3b-4f87-b61c-f0c12fc310a5] Running
	I0821 10:35:53.006112   13501 system_pods.go:61] "kube-ingress-dns-minikube" [be443012-59a1-4f29-b1c9-286201a12290] Running
	I0821 10:35:53.006119   13501 system_pods.go:61] "kube-proxy-w4j8s" [a2fe243c-fe13-4e80-800a-a30f510aaa3c] Running
	I0821 10:35:53.006124   13501 system_pods.go:61] "kube-scheduler-addons-351207" [9ba5945e-28cf-45db-9727-36e2230a851c] Running
	I0821 10:35:53.006130   13501 system_pods.go:61] "metrics-server-7746886d4f-xl26c" [cdb34a8d-7efb-49db-88d0-c69ba81643b8] Running
	I0821 10:35:53.006135   13501 system_pods.go:61] "registry-proxy-srz8z" [f70199f4-b53c-4224-9c07-94522d99ba02] Running
	I0821 10:35:53.006146   13501 system_pods.go:61] "registry-rszhp" [049e7ace-e189-472f-a06b-acb5921f60c6] Running
	I0821 10:35:53.006150   13501 system_pods.go:61] "snapshot-controller-75bbb956b9-bh2sd" [2c532ff5-64e3-46b5-ba05-9a2c59863fdc] Running
	I0821 10:35:53.006154   13501 system_pods.go:61] "snapshot-controller-75bbb956b9-xndwk" [4bde75b5-068f-416f-b8bd-7d27246ec232] Running
	I0821 10:35:53.006158   13501 system_pods.go:61] "storage-provisioner" [e3765c7d-ba00-455a-9890-a64e52d3e239] Running
	I0821 10:35:53.006165   13501 system_pods.go:61] "tiller-deploy-6847666dc-87cll" [c5ec8a2f-92c6-4d65-8462-c018b253cf0d] Running
	I0821 10:35:53.006169   13501 system_pods.go:74] duration metric: took 3.317990494s to wait for pod list to return data ...
	I0821 10:35:53.006179   13501 default_sa.go:34] waiting for default service account to be created ...
	I0821 10:35:53.008009   13501 default_sa.go:45] found service account: "default"
	I0821 10:35:53.008025   13501 default_sa.go:55] duration metric: took 1.841342ms for default service account to be created ...
	I0821 10:35:53.008032   13501 system_pods.go:116] waiting for k8s-apps to be running ...
	I0821 10:35:53.015007   13501 system_pods.go:86] 18 kube-system pods found
	I0821 10:35:53.015027   13501 system_pods.go:89] "coredns-5d78c9869d-499vt" [63a954fb-cd41-4a25-9150-0db15cc96e39] Running
	I0821 10:35:53.015032   13501 system_pods.go:89] "csi-hostpath-attacher-0" [4c0c0a5b-54a3-49f0-a323-54113e1b4fcd] Running
	I0821 10:35:53.015037   13501 system_pods.go:89] "csi-hostpath-resizer-0" [286291f9-6d9b-45d0-8f5f-10f3b0310229] Running
	I0821 10:35:53.015041   13501 system_pods.go:89] "csi-hostpathplugin-wkdp6" [9578d95d-733f-43a7-90c2-27c8165209a8] Running
	I0821 10:35:53.015045   13501 system_pods.go:89] "etcd-addons-351207" [36e4bf41-7d21-4bd7-ba76-7f927f3ca07d] Running
	I0821 10:35:53.015050   13501 system_pods.go:89] "kindnet-6sk4p" [46428db7-38cd-4cb2-8c85-3bdb166ed4d9] Running
	I0821 10:35:53.015054   13501 system_pods.go:89] "kube-apiserver-addons-351207" [ca9b6415-182e-4c60-be06-9cdcf4dd8d12] Running
	I0821 10:35:53.015059   13501 system_pods.go:89] "kube-controller-manager-addons-351207" [82fd4ecc-4b3b-4f87-b61c-f0c12fc310a5] Running
	I0821 10:35:53.015063   13501 system_pods.go:89] "kube-ingress-dns-minikube" [be443012-59a1-4f29-b1c9-286201a12290] Running
	I0821 10:35:53.015067   13501 system_pods.go:89] "kube-proxy-w4j8s" [a2fe243c-fe13-4e80-800a-a30f510aaa3c] Running
	I0821 10:35:53.015072   13501 system_pods.go:89] "kube-scheduler-addons-351207" [9ba5945e-28cf-45db-9727-36e2230a851c] Running
	I0821 10:35:53.015078   13501 system_pods.go:89] "metrics-server-7746886d4f-xl26c" [cdb34a8d-7efb-49db-88d0-c69ba81643b8] Running
	I0821 10:35:53.015083   13501 system_pods.go:89] "registry-proxy-srz8z" [f70199f4-b53c-4224-9c07-94522d99ba02] Running
	I0821 10:35:53.015089   13501 system_pods.go:89] "registry-rszhp" [049e7ace-e189-472f-a06b-acb5921f60c6] Running
	I0821 10:35:53.015093   13501 system_pods.go:89] "snapshot-controller-75bbb956b9-bh2sd" [2c532ff5-64e3-46b5-ba05-9a2c59863fdc] Running
	I0821 10:35:53.015100   13501 system_pods.go:89] "snapshot-controller-75bbb956b9-xndwk" [4bde75b5-068f-416f-b8bd-7d27246ec232] Running
	I0821 10:35:53.015104   13501 system_pods.go:89] "storage-provisioner" [e3765c7d-ba00-455a-9890-a64e52d3e239] Running
	I0821 10:35:53.015108   13501 system_pods.go:89] "tiller-deploy-6847666dc-87cll" [c5ec8a2f-92c6-4d65-8462-c018b253cf0d] Running
	I0821 10:35:53.015113   13501 system_pods.go:126] duration metric: took 7.078148ms to wait for k8s-apps to be running ...
	I0821 10:35:53.015126   13501 system_svc.go:44] waiting for kubelet service to be running ....
	I0821 10:35:53.015164   13501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0821 10:35:53.025543   13501 system_svc.go:56] duration metric: took 10.411458ms WaitForService to wait for kubelet.
	I0821 10:35:53.025567   13501 kubeadm.go:581] duration metric: took 1m15.293277376s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0821 10:35:53.025589   13501 node_conditions.go:102] verifying NodePressure condition ...
	I0821 10:35:53.028127   13501 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0821 10:35:53.028150   13501 node_conditions.go:123] node cpu capacity is 8
	I0821 10:35:53.028162   13501 node_conditions.go:105] duration metric: took 2.568625ms to run NodePressure ...
	I0821 10:35:53.028171   13501 start.go:228] waiting for startup goroutines ...
	I0821 10:35:53.028177   13501 start.go:233] waiting for cluster config update ...
	I0821 10:35:53.028188   13501 start.go:242] writing updated cluster config ...
	I0821 10:35:53.029003   13501 ssh_runner.go:195] Run: rm -f paused
	I0821 10:35:53.075037   13501 start.go:600] kubectl: 1.28.0, cluster: 1.27.4 (minor skew: 1)
	I0821 10:35:53.137994   13501 out.go:177] * Done! kubectl is now configured to use "addons-351207" cluster and "default" namespace by default
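
The readiness gate in the run above is a plain HTTPS GET against the apiserver's /healthz endpoint (see the api_server.go lines at 10:35:49, which log a 200 with body "ok"). A minimal Go sketch of that probe, assuming the endpoint URL from the log and skipping TLS verification for brevity (minikube itself trusts the cluster's CA):

    // healthz_probe.go - a sketch of the apiserver health poll, assuming the
    // URL from the log; InsecureSkipVerify is a shortcut for this sketch only.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   2 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(30 * time.Second)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.49.2:8443/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    				return
    			}
    		}
    		time.Sleep(time.Second) // retry until healthy or the deadline passes
    	}
    	fmt.Println("apiserver did not become healthy before the deadline")
    }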
	
	* 
	* ==> CRI-O <==
	* Aug 21 10:38:31 addons-351207 crio[951]: time="2023-08-21 10:38:31.177215279Z" level=info msg="Pulled image: gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea" id=1250ffa6-ecd4-4b06-b64e-98b9a3bcf659 name=/runtime.v1.ImageService/PullImage
	Aug 21 10:38:31 addons-351207 crio[951]: time="2023-08-21 10:38:31.177947453Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=286aa89b-f2d7-4b28-b8cf-143011ac7903 name=/runtime.v1.ImageService/ImageStatus
	Aug 21 10:38:31 addons-351207 crio[951]: time="2023-08-21 10:38:31.178554666Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:13753a81eccfdd153bf7fc9a4c9198edbcce0110e7f46ed0d38cc654a6458ff5,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea],Size_:28496999,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=286aa89b-f2d7-4b28-b8cf-143011ac7903 name=/runtime.v1.ImageService/ImageStatus
	Aug 21 10:38:31 addons-351207 crio[951]: time="2023-08-21 10:38:31.179416343Z" level=info msg="Creating container: default/hello-world-app-65bdb79f98-hp49p/hello-world-app" id=d1c3523d-7fb0-4ef3-a1cd-2396cd840989 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 21 10:38:31 addons-351207 crio[951]: time="2023-08-21 10:38:31.179531476Z" level=warning msg="Allowed annotations are specified for workload []"
	Aug 21 10:38:31 addons-351207 crio[951]: time="2023-08-21 10:38:31.251785568Z" level=info msg="Created container 7783b80cf598a0c41fd084389542df0f32a002152d127eb335fa80afd8ba72d7: default/hello-world-app-65bdb79f98-hp49p/hello-world-app" id=d1c3523d-7fb0-4ef3-a1cd-2396cd840989 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 21 10:38:31 addons-351207 crio[951]: time="2023-08-21 10:38:31.252295450Z" level=info msg="Starting container: 7783b80cf598a0c41fd084389542df0f32a002152d127eb335fa80afd8ba72d7" id=36f277d0-50fc-4a36-a40c-d22ff9db721e name=/runtime.v1.RuntimeService/StartContainer
	Aug 21 10:38:31 addons-351207 crio[951]: time="2023-08-21 10:38:31.260753792Z" level=info msg="Started container" PID=10106 containerID=7783b80cf598a0c41fd084389542df0f32a002152d127eb335fa80afd8ba72d7 description=default/hello-world-app-65bdb79f98-hp49p/hello-world-app id=36f277d0-50fc-4a36-a40c-d22ff9db721e name=/runtime.v1.RuntimeService/StartContainer sandboxID=4de07d761127a5fca593161f0a18dc468e13b84416cbef0674e58ef4754b4cd4
	Aug 21 10:38:31 addons-351207 crio[951]: time="2023-08-21 10:38:31.643146405Z" level=info msg="Removing container: 2b7b80bdf1931f2ee774231255c3155149a0f46c4b32b3ef454fa834310bd13c" id=7a332aa2-64bd-4715-82c9-98bffc562e72 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 21 10:38:31 addons-351207 crio[951]: time="2023-08-21 10:38:31.659744992Z" level=info msg="Removed container 2b7b80bdf1931f2ee774231255c3155149a0f46c4b32b3ef454fa834310bd13c: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=7a332aa2-64bd-4715-82c9-98bffc562e72 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 21 10:38:32 addons-351207 crio[951]: time="2023-08-21 10:38:32.161557310Z" level=info msg="Stopping container: e6a98262d7a9c42db7f7cb0b495423725ca72cd66e7888482d47be4ed4e9b727 (timeout: 1s)" id=4456724c-57a4-4958-aa74-ea8d329aaf37 name=/runtime.v1.RuntimeService/StopContainer
	Aug 21 10:38:33 addons-351207 crio[951]: time="2023-08-21 10:38:33.171959620Z" level=warning msg="Stopping container e6a98262d7a9c42db7f7cb0b495423725ca72cd66e7888482d47be4ed4e9b727 with stop signal timed out: timeout reached after 1 seconds waiting for container process to exit" id=4456724c-57a4-4958-aa74-ea8d329aaf37 name=/runtime.v1.RuntimeService/StopContainer
	Aug 21 10:38:33 addons-351207 conmon[5443]: conmon e6a98262d7a9c42db7f7 <ninfo>: container 5455 exited with status 137
	Aug 21 10:38:33 addons-351207 crio[951]: time="2023-08-21 10:38:33.316870403Z" level=info msg="Stopped container e6a98262d7a9c42db7f7cb0b495423725ca72cd66e7888482d47be4ed4e9b727: ingress-nginx/ingress-nginx-controller-7799c6795f-n8cq5/controller" id=4456724c-57a4-4958-aa74-ea8d329aaf37 name=/runtime.v1.RuntimeService/StopContainer
	Aug 21 10:38:33 addons-351207 crio[951]: time="2023-08-21 10:38:33.317458873Z" level=info msg="Stopping pod sandbox: de9f4e088a4988cda815e8a364816a5ee892290e3265137d67259e7e2476d10b" id=62838c06-c6dd-47e7-af9d-c1626768c631 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 21 10:38:33 addons-351207 crio[951]: time="2023-08-21 10:38:33.320450742Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-AQLMHHZCK36YZ3XH - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-QHOT4MMSVOUHLLGU - [0:0]\n-X KUBE-HP-AQLMHHZCK36YZ3XH\n-X KUBE-HP-QHOT4MMSVOUHLLGU\nCOMMIT\n"
	Aug 21 10:38:33 addons-351207 crio[951]: time="2023-08-21 10:38:33.321792459Z" level=info msg="Closing host port tcp:80"
	Aug 21 10:38:33 addons-351207 crio[951]: time="2023-08-21 10:38:33.321831654Z" level=info msg="Closing host port tcp:443"
	Aug 21 10:38:33 addons-351207 crio[951]: time="2023-08-21 10:38:33.323251732Z" level=info msg="Host port tcp:80 does not have an open socket"
	Aug 21 10:38:33 addons-351207 crio[951]: time="2023-08-21 10:38:33.323275670Z" level=info msg="Host port tcp:443 does not have an open socket"
	Aug 21 10:38:33 addons-351207 crio[951]: time="2023-08-21 10:38:33.323485109Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7799c6795f-n8cq5 Namespace:ingress-nginx ID:de9f4e088a4988cda815e8a364816a5ee892290e3265137d67259e7e2476d10b UID:7dde345f-aa9c-4f01-88e5-385e8c84b005 NetNS:/var/run/netns/a83de4ce-e8f5-4853-90a6-9d13ed04f075 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 21 10:38:33 addons-351207 crio[951]: time="2023-08-21 10:38:33.323661966Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7799c6795f-n8cq5 from CNI network \"kindnet\" (type=ptp)"
	Aug 21 10:38:33 addons-351207 crio[951]: time="2023-08-21 10:38:33.364629202Z" level=info msg="Stopped pod sandbox: de9f4e088a4988cda815e8a364816a5ee892290e3265137d67259e7e2476d10b" id=62838c06-c6dd-47e7-af9d-c1626768c631 name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 21 10:38:33 addons-351207 crio[951]: time="2023-08-21 10:38:33.650250236Z" level=info msg="Removing container: e6a98262d7a9c42db7f7cb0b495423725ca72cd66e7888482d47be4ed4e9b727" id=7a651a00-3b64-47f3-b16a-c49a4868553a name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 21 10:38:33 addons-351207 crio[951]: time="2023-08-21 10:38:33.663864766Z" level=info msg="Removed container e6a98262d7a9c42db7f7cb0b495423725ca72cd66e7888482d47be4ed4e9b727: ingress-nginx/ingress-nginx-controller-7799c6795f-n8cq5/controller" id=7a651a00-3b64-47f3-b16a-c49a4868553a name=/runtime.v1.RuntimeService/RemoveContainer
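
The exit status 137 reported by conmon above is 128+9: the ingress controller ignored the graceful stop signal, CRI-O's 1s stop timeout expired, and the process was SIGKILLed. A sketch of that escalation on a Unix system, using a shell child that ignores SIGTERM as a stand-in for the container process (an illustration of the pattern, not CRI-O's code):

    // stop_timeout.go - SIGTERM, wait out the 1s stop timeout from the log,
    // then SIGKILL; the reported status is 128 + signal number (137 = SIGKILL).
    package main

    import (
    	"fmt"
    	"os/exec"
    	"syscall"
    	"time"
    )

    func main() {
    	cmd := exec.Command("sh", "-c", `trap "" TERM; sleep 60`) // ignores SIGTERM
    	if err := cmd.Start(); err != nil {
    		panic(err)
    	}
    	done := make(chan error, 1)
    	go func() { done <- cmd.Wait() }()

    	cmd.Process.Signal(syscall.SIGTERM) // graceful stop attempt
    	select {
    	case <-done:
    		fmt.Println("exited after SIGTERM")
    	case <-time.After(1 * time.Second): // stop timeout reached
    		fmt.Println("stop signal timed out: escalating to SIGKILL")
    		cmd.Process.Kill()
    		<-done
    	}
    	if ws, ok := cmd.ProcessState.Sys().(syscall.WaitStatus); ok && ws.Signaled() {
    		fmt.Printf("exited with status %d\n", 128+int(ws.Signal())) // prints 137
    	}
    }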
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7783b80cf598a       gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea                      9 seconds ago       Running             hello-world-app           0                   4de07d761127a       hello-world-app-65bdb79f98-hp49p
	6ee0c5e01dd71       docker.io/library/nginx@sha256:9d749f4c66e206398d0c2d667b2b14c201cc4fd089419245c14a031b9368de3a                              2 minutes ago       Running             nginx                     0                   9fadc1d273f65       nginx
	dd58391f65e0b       ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552                        2 minutes ago       Running             headlamp                  0                   6a6e74e9f9c14       headlamp-5c78f74d8d-9h9n5
	b8cf6e263491c       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 2 minutes ago       Running             gcp-auth                  0                   a98969e0be6f0       gcp-auth-58478865f7-j687s
	5590089ee11fe       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d   3 minutes ago       Exited              patch                     0                   b212c4366f267       ingress-nginx-admission-patch-llcgl
	3e0aac5038461       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d   3 minutes ago       Exited              create                    0                   cf3bcfe4e3e0d       ingress-nginx-admission-create-hc8d2
	684bc2aebf91e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             3 minutes ago       Running             storage-provisioner       0                   c693aedb1d855       storage-provisioner
	f37f240cb72aa       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             3 minutes ago       Running             coredns                   0                   cfed79eb078d1       coredns-5d78c9869d-499vt
	a8b0b7e51bcbe       b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da                                                             4 minutes ago       Running             kindnet-cni               0                   6b3cda3423fc7       kindnet-6sk4p
	404f545a04bcd       6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4                                                             4 minutes ago       Running             kube-proxy                0                   da426d3a37b33       kube-proxy-w4j8s
	70f840e974edb       98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16                                                             4 minutes ago       Running             kube-scheduler            0                   db29da3760840       kube-scheduler-addons-351207
	29e2b1aebe335       e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c                                                             4 minutes ago       Running             kube-apiserver            0                   54a0a89ca352c       kube-apiserver-addons-351207
	1a31c61730b75       f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5                                                             4 minutes ago       Running             kube-controller-manager   0                   a846dd691b8fb       kube-controller-manager-addons-351207
	edc8ee7008505       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681                                                             4 minutes ago       Running             etcd                      0                   328ecf17b2409       etcd-addons-351207
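
The container IDs in this table are the same ones the `cri.go: listing CRI containers` steps above collect with `sudo crictl ps -a --quiet --name=<name>`; `--quiet` prints one container ID per line, which the `found id:` log lines echo back. A sketch of that lookup, assuming crictl and sudo are available on the node:

    // crictl_list.go - run the crictl query from the log and collect the IDs.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func listContainers(name string) ([]string, error) {
    	// Mirrors: sudo crictl ps -a --quiet --name=<name>
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	var ids []string
    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		if line != "" {
    			ids = append(ids, line)
    		}
    	}
    	return ids, nil
    }

    func main() {
    	ids, err := listContainers("kube-proxy")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%d containers: %v\n", len(ids), ids) // matches the logs.go output format
    }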
	
	* 
	* ==> coredns [f37f240cb72aa82529c30c747349e4a232d1b88f8e05e04d9aab08d5d42688a9] <==
	* [INFO] 10.244.0.11:44670 - 3776 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000130375s
	[INFO] 10.244.0.11:45416 - 4335 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.005220103s
	[INFO] 10.244.0.11:45416 - 33514 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.005452736s
	[INFO] 10.244.0.11:59069 - 7703 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004248648s
	[INFO] 10.244.0.11:59069 - 12308 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005589637s
	[INFO] 10.244.0.11:50554 - 52677 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.003707182s
	[INFO] 10.244.0.11:50554 - 39928 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004379217s
	[INFO] 10.244.0.11:47799 - 6842 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000070367s
	[INFO] 10.244.0.11:47799 - 6328 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000111435s
	[INFO] 10.244.0.18:45039 - 16835 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000177002s
	[INFO] 10.244.0.18:49230 - 48279 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00026917s
	[INFO] 10.244.0.18:52401 - 1718 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000116211s
	[INFO] 10.244.0.18:55156 - 9329 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000089255s
	[INFO] 10.244.0.18:49084 - 17678 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000147392s
	[INFO] 10.244.0.18:59765 - 46212 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000101075s
	[INFO] 10.244.0.18:53884 - 59204 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.005953019s
	[INFO] 10.244.0.18:58873 - 15496 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.00618467s
	[INFO] 10.244.0.18:59665 - 20035 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006916618s
	[INFO] 10.244.0.18:32859 - 45212 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.007476814s
	[INFO] 10.244.0.18:43756 - 25864 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004552994s
	[INFO] 10.244.0.18:46343 - 42058 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00584908s
	[INFO] 10.244.0.18:47802 - 16570 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000684266s
	[INFO] 10.244.0.18:55415 - 32320 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000823411s
	[INFO] 10.244.0.20:39687 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000183464s
	[INFO] 10.244.0.20:44468 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000163728s
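
The NXDOMAIN chains above (e.g. `registry.kube-system.svc.cluster.local.cluster.local`) are the pod resolver appending each resolv.conf search suffix before trying the name as-is, because the name has fewer dots than the ndots threshold. A sketch of that expansion, assuming the suffixes visible in the log and the usual Kubernetes ndots:5 (a pod's real search list typically also carries `<ns>.svc.cluster.local` and `svc.cluster.local`; check the pod's /etc/resolv.conf for the actual values):

    // search_expansion.go - generate the candidate query names for a relative
    // name given a search list and an ndots threshold.
    package main

    import (
    	"fmt"
    	"strings"
    )

    func candidates(name string, search []string, ndots int) []string {
    	if strings.HasSuffix(name, ".") { // fully qualified: tried verbatim only
    		return []string{name}
    	}
    	var out []string
    	dots := strings.Count(name, ".")
    	if dots >= ndots { // enough dots: try the absolute name first
    		out = append(out, name+".")
    	}
    	for _, s := range search { // then each search suffix in order
    		out = append(out, name+"."+s+".")
    	}
    	if dots < ndots { // too few dots: absolute name comes last
    		out = append(out, name+".")
    	}
    	return out
    }

    func main() {
    	// Suffixes as evidenced by the coredns queries above.
    	search := []string{
    		"cluster.local",
    		"us-central1-a.c.k8s-minikube.internal",
    		"c.k8s-minikube.internal",
    		"google.internal",
    	}
    	for _, q := range candidates("registry.kube-system.svc.cluster.local", search, 5) {
    		fmt.Println(q)
    	}
    }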
	
	* 
	* ==> describe nodes <==
	* Name:               addons-351207
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-351207
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43
	                    minikube.k8s.io/name=addons-351207
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_21T10_34_25_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-351207
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 21 Aug 2023 10:34:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-351207
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 21 Aug 2023 10:38:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 21 Aug 2023 10:36:58 +0000   Mon, 21 Aug 2023 10:34:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 21 Aug 2023 10:36:58 +0000   Mon, 21 Aug 2023 10:34:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 21 Aug 2023 10:36:58 +0000   Mon, 21 Aug 2023 10:34:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 21 Aug 2023 10:36:58 +0000   Mon, 21 Aug 2023 10:35:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-351207
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	System Info:
	  Machine ID:                 850678747ca6446093f17f2ff6891d7a
	  System UUID:                d87dd1ec-01d4-4e63-a163-405c40ff80b3
	  Boot ID:                    19bba9d5-fb53-4c36-8f17-b39d772f0931
	  Kernel Version:             5.15.0-1039-gcp
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.27.4
	  Kube-Proxy Version:         v1.27.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-65bdb79f98-hp49p         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  gcp-auth                    gcp-auth-58478865f7-j687s                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  headlamp                    headlamp-5c78f74d8d-9h9n5                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  kube-system                 coredns-5d78c9869d-499vt                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m3s
	  kube-system                 etcd-addons-351207                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m16s
	  kube-system                 kindnet-6sk4p                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m3s
	  kube-system                 kube-apiserver-addons-351207             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-controller-manager-addons-351207    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-proxy-w4j8s                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-scheduler-addons-351207             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m58s  kube-proxy       
	  Normal  Starting                 4m16s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m16s  kubelet          Node addons-351207 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m16s  kubelet          Node addons-351207 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m16s  kubelet          Node addons-351207 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m4s   node-controller  Node addons-351207 event: Registered Node addons-351207 in Controller
	  Normal  NodeReady                3m29s  kubelet          Node addons-351207 status is now: NodeReady
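
The percentages in the "Allocated resources" table are just summed pod requests over the node's allocatable capacity from the same description: 850m of 8 CPUs is 10%, and 220Mi against 32859432Ki (about 31.3Gi) of memory rounds down to 0%. A sketch of the arithmetic with the values above:

    // resource_pct.go - recompute the allocated-resources percentages from the
    // request totals and allocatable capacity shown in the node description.
    package main

    import "fmt"

    func main() {
    	const (
    		cpuRequestMilli = 850        // 850m summed CPU requests
    		cpuAllocMilli   = 8 * 1000   // 8 allocatable CPUs
    		memRequestKi    = 220 * 1024 // 220Mi summed memory requests
    		memAllocKi      = 32859432   // allocatable memory (Ki) from the node status
    	)
    	fmt.Printf("cpu:    %d%%\n", cpuRequestMilli*100/cpuAllocMilli) // 10%
    	fmt.Printf("memory: %d%%\n", memRequestKi*100/memAllocKi)      // 0%
    }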
	
	* 
	* ==> dmesg <==
	* [  +0.007371] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003099] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000668] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000607] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000646] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000619] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000607] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000639] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000650] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.001229] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +8.978024] kauditd_printk_skb: 36 callbacks suppressed
	[Aug21 10:36] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 56 86 ca 75 70 95 06 34 8f 95 55 ec 08 00
	[  +1.008304] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 56 86 ca 75 70 95 06 34 8f 95 55 ec 08 00
	[  +2.015768] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 56 86 ca 75 70 95 06 34 8f 95 55 ec 08 00
	[  +4.127585] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 86 ca 75 70 95 06 34 8f 95 55 ec 08 00
	[  +8.191218] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 86 ca 75 70 95 06 34 8f 95 55 ec 08 00
	[ +16.130427] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 86 ca 75 70 95 06 34 8f 95 55 ec 08 00
	[Aug21 10:37] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 86 ca 75 70 95 06 34 8f 95 55 ec 08 00
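
The repeating "martian source" lines mean packets with a loopback source address (127.0.0.1) arrived on eth0; the kernel logs these when log_martians is set and refuses them unless route_localnet permits loopback routing on that interface. A sketch for inspecting those sysctls on the node (the interface name is an example):

    // martian_check.go - read the sysctls relevant to the martian-source
    // messages directly from /proc/sys.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func sysctl(path string) string {
    	b, err := os.ReadFile(path)
    	if err != nil {
    		return "unreadable: " + err.Error()
    	}
    	return strings.TrimSpace(string(b))
    }

    func main() {
    	for _, key := range []string{
    		"/proc/sys/net/ipv4/conf/all/log_martians",
    		"/proc/sys/net/ipv4/conf/all/route_localnet",
    		"/proc/sys/net/ipv4/conf/eth0/rp_filter", // eth0 as seen in dmesg
    	} {
    		fmt.Printf("%s = %s\n", key, sysctl(key))
    	}
    }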
	
	* 
	* ==> etcd [edc8ee70085051c43f733610fec1879fc253e9164445206acac2b1eb4d920335] <==
	* {"level":"info","ts":"2023-08-21T10:34:41.139Z","caller":"traceutil/trace.go:171","msg":"trace[501574359] transaction","detail":"{read_only:false; response_revision:392; number_of_response:1; }","duration":"181.217699ms","start":"2023-08-21T10:34:40.958Z","end":"2023-08-21T10:34:41.139Z","steps":["trace[501574359] 'process raft request'  (duration: 94.444785ms)","trace[501574359] 'compare'  (duration: 85.565183ms)"],"step_count":2}
	{"level":"warn","ts":"2023-08-21T10:34:41.140Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.756071ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-351207\" ","response":"range_response_count:1 size:5654"}
	{"level":"info","ts":"2023-08-21T10:34:41.141Z","caller":"traceutil/trace.go:171","msg":"trace[1052005463] range","detail":"{range_begin:/registry/minions/addons-351207; range_end:; response_count:1; response_revision:394; }","duration":"105.400032ms","start":"2023-08-21T10:34:41.035Z","end":"2023-08-21T10:34:41.141Z","steps":["trace[1052005463] 'agreement among raft nodes before linearized reading'  (duration: 104.68067ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-21T10:34:41.140Z","caller":"traceutil/trace.go:171","msg":"trace[1869345403] linearizableReadLoop","detail":"{readStateIndex:403; appliedIndex:401; }","duration":"102.462538ms","start":"2023-08-21T10:34:41.035Z","end":"2023-08-21T10:34:41.138Z","steps":["trace[1869345403] 'read index received'  (duration: 16.509996ms)","trace[1869345403] 'applied index is now lower than readState.Index'  (duration: 85.951697ms)"],"step_count":2}
	{"level":"warn","ts":"2023-08-21T10:34:41.141Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.570275ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/default/cloud-spanner-emulator\" ","response":"range_response_count:1 size:3142"}
	{"level":"info","ts":"2023-08-21T10:34:41.142Z","caller":"traceutil/trace.go:171","msg":"trace[1808274553] range","detail":"{range_begin:/registry/deployments/default/cloud-spanner-emulator; range_end:; response_count:1; response_revision:395; }","duration":"102.667153ms","start":"2023-08-21T10:34:41.039Z","end":"2023-08-21T10:34:41.142Z","steps":["trace[1808274553] 'agreement among raft nodes before linearized reading'  (duration: 102.501543ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-21T10:34:41.142Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.890303ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-w4j8s\" ","response":"range_response_count:1 size:4422"}
	{"level":"info","ts":"2023-08-21T10:34:41.142Z","caller":"traceutil/trace.go:171","msg":"trace[1869410895] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-w4j8s; range_end:; response_count:1; response_revision:395; }","duration":"102.96367ms","start":"2023-08-21T10:34:41.039Z","end":"2023-08-21T10:34:41.142Z","steps":["trace[1869410895] 'agreement among raft nodes before linearized reading'  (duration: 102.855112ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-21T10:34:41.251Z","caller":"traceutil/trace.go:171","msg":"trace[1110861200] transaction","detail":"{read_only:false; response_revision:397; number_of_response:1; }","duration":"101.017346ms","start":"2023-08-21T10:34:41.150Z","end":"2023-08-21T10:34:41.251Z","steps":["trace[1110861200] 'process raft request'  (duration: 94.968781ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-21T10:34:41.254Z","caller":"traceutil/trace.go:171","msg":"trace[878623889] transaction","detail":"{read_only:false; response_revision:398; number_of_response:1; }","duration":"100.690809ms","start":"2023-08-21T10:34:41.153Z","end":"2023-08-21T10:34:41.254Z","steps":["trace[878623889] 'process raft request'  (duration: 93.616586ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-21T10:34:41.956Z","caller":"traceutil/trace.go:171","msg":"trace[1168057022] transaction","detail":"{read_only:false; response_revision:419; number_of_response:1; }","duration":"107.916931ms","start":"2023-08-21T10:34:41.848Z","end":"2023-08-21T10:34:41.956Z","steps":["trace[1168057022] 'process raft request'  (duration: 107.35687ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-21T10:34:41.956Z","caller":"traceutil/trace.go:171","msg":"trace[1251301997] linearizableReadLoop","detail":"{readStateIndex:430; appliedIndex:427; }","duration":"108.19479ms","start":"2023-08-21T10:34:41.848Z","end":"2023-08-21T10:34:41.956Z","steps":["trace[1251301997] 'read index received'  (duration: 8.429002ms)","trace[1251301997] 'applied index is now lower than readState.Index'  (duration: 99.764704ms)"],"step_count":2}
	{"level":"warn","ts":"2023-08-21T10:34:41.957Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.826793ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/kube-system/registry\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-08-21T10:34:41.957Z","caller":"traceutil/trace.go:171","msg":"trace[1818108904] range","detail":"{range_begin:/registry/services/specs/kube-system/registry; range_end:; response_count:0; response_revision:422; }","duration":"108.930029ms","start":"2023-08-21T10:34:41.848Z","end":"2023-08-21T10:34:41.957Z","steps":["trace[1818108904] 'agreement among raft nodes before linearized reading'  (duration: 108.782923ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-21T10:34:41.957Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.028475ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replication-controller\" ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2023-08-21T10:34:41.957Z","caller":"traceutil/trace.go:171","msg":"trace[1199523015] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replication-controller; range_end:; response_count:1; response_revision:422; }","duration":"109.103598ms","start":"2023-08-21T10:34:41.848Z","end":"2023-08-21T10:34:41.957Z","steps":["trace[1199523015] 'agreement among raft nodes before linearized reading'  (duration: 108.982661ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-21T10:35:58.955Z","caller":"traceutil/trace.go:171","msg":"trace[538747193] linearizableReadLoop","detail":"{readStateIndex:1143; appliedIndex:1142; }","duration":"146.22385ms","start":"2023-08-21T10:35:58.809Z","end":"2023-08-21T10:35:58.955Z","steps":["trace[538747193] 'read index received'  (duration: 72.468553ms)","trace[538747193] 'applied index is now lower than readState.Index'  (duration: 73.75458ms)"],"step_count":2}
	{"level":"info","ts":"2023-08-21T10:35:58.955Z","caller":"traceutil/trace.go:171","msg":"trace[1632612462] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1110; }","duration":"147.267494ms","start":"2023-08-21T10:35:58.808Z","end":"2023-08-21T10:35:58.955Z","steps":["trace[1632612462] 'process raft request'  (duration: 73.44046ms)","trace[1632612462] 'compare'  (duration: 73.642449ms)"],"step_count":2}
	{"level":"warn","ts":"2023-08-21T10:35:58.955Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"146.416821ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/cloud-spanner-emulator\" ","response":"range_response_count:1 size:687"}
	{"level":"info","ts":"2023-08-21T10:35:58.956Z","caller":"traceutil/trace.go:171","msg":"trace[1512243333] range","detail":"{range_begin:/registry/services/endpoints/default/cloud-spanner-emulator; range_end:; response_count:1; response_revision:1110; }","duration":"147.337719ms","start":"2023-08-21T10:35:58.809Z","end":"2023-08-21T10:35:58.956Z","steps":["trace[1512243333] 'agreement among raft nodes before linearized reading'  (duration: 146.339107ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-21T10:35:58.956Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.273745ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/default/cloud-spanner-emulator-ktwsb\" ","response":"range_response_count:1 size:1242"}
	{"level":"info","ts":"2023-08-21T10:35:58.957Z","caller":"traceutil/trace.go:171","msg":"trace[898019379] range","detail":"{range_begin:/registry/endpointslices/default/cloud-spanner-emulator-ktwsb; range_end:; response_count:1; response_revision:1110; }","duration":"122.248089ms","start":"2023-08-21T10:35:58.835Z","end":"2023-08-21T10:35:58.957Z","steps":["trace[898019379] 'agreement among raft nodes before linearized reading'  (duration: 120.245344ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-21T10:35:58.956Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.168818ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/cloud-spanner-emulator\" ","response":"range_response_count:1 size:687"}
	{"level":"info","ts":"2023-08-21T10:35:58.958Z","caller":"traceutil/trace.go:171","msg":"trace[797782902] range","detail":"{range_begin:/registry/services/endpoints/default/cloud-spanner-emulator; range_end:; response_count:1; response_revision:1110; }","duration":"122.313258ms","start":"2023-08-21T10:35:58.835Z","end":"2023-08-21T10:35:58.958Z","steps":["trace[797782902] 'agreement among raft nodes before linearized reading'  (duration: 120.120199ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-21T10:36:04.487Z","caller":"traceutil/trace.go:171","msg":"trace[357013417] transaction","detail":"{read_only:false; response_revision:1152; number_of_response:1; }","duration":"104.454209ms","start":"2023-08-21T10:36:04.382Z","end":"2023-08-21T10:36:04.487Z","steps":["trace[357013417] 'process raft request'  (duration: 93.498938ms)","trace[357013417] 'compare'  (duration: 10.79465ms)"],"step_count":2}
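
etcd emits the "apply request took too long" warnings above whenever an apply exceeds its 100ms budget, and the structured JSON fields make them easy to mine. A sketch that parses one of those lines, with the field names (level, msg, took, expected-duration) taken from the log itself:

    // slow_apply.go - parse an etcd slow-apply warning and report the overrun.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"time"
    )

    func main() {
    	line := `{"level":"warn","ts":"2023-08-21T10:34:41.140Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.756071ms","expected-duration":"100ms"}`
    	var entry struct {
    		Level    string `json:"level"`
    		Msg      string `json:"msg"`
    		Took     string `json:"took"`
    		Expected string `json:"expected-duration"`
    	}
    	if err := json.Unmarshal([]byte(line), &entry); err != nil {
    		panic(err)
    	}
    	took, err := time.ParseDuration(entry.Took)
    	if err != nil {
    		panic(err)
    	}
    	limit, err := time.ParseDuration(entry.Expected)
    	if err != nil {
    		panic(err)
    	}
    	if entry.Msg == "apply request took too long" && took > limit {
    		fmt.Printf("slow apply: took %v, budget %v (over by %v)\n", took, limit, took-limit)
    	}
    }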
	
	* 
	* ==> gcp-auth [b8cf6e263491c5d078bb3e25af12696a99b846e7f67a3561412930572445c56c] <==
	* 2023/08/21 10:35:43 GCP Auth Webhook started!
	2023/08/21 10:35:54 Ready to marshal response ...
	2023/08/21 10:35:54 Ready to write response ...
	2023/08/21 10:35:54 Ready to marshal response ...
	2023/08/21 10:35:54 Ready to write response ...
	2023/08/21 10:35:54 Ready to marshal response ...
	2023/08/21 10:35:54 Ready to write response ...
	2023/08/21 10:36:03 Ready to marshal response ...
	2023/08/21 10:36:03 Ready to write response ...
	2023/08/21 10:36:04 Ready to marshal response ...
	2023/08/21 10:36:04 Ready to write response ...
	2023/08/21 10:36:09 Ready to marshal response ...
	2023/08/21 10:36:09 Ready to write response ...
	2023/08/21 10:36:30 Ready to marshal response ...
	2023/08/21 10:36:30 Ready to write response ...
	2023/08/21 10:36:54 Ready to marshal response ...
	2023/08/21 10:36:54 Ready to write response ...
	2023/08/21 10:38:29 Ready to marshal response ...
	2023/08/21 10:38:29 Ready to write response ...
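
Each mutation the webhook handles produces one "Ready to marshal response" / "Ready to write response" pair above, i.e. one log line before encoding the admission response and one before writing it. A generic handler sketch of that pattern (hypothetical names and a placeholder response body; not the gcp-auth webhook's actual code):

    // webhook_log.go - an HTTP handler that logs before marshaling and before
    // writing its response, mirroring the pattern in the gcp-auth log.
    package main

    import (
    	"encoding/json"
    	"log"
    	"net/http"
    )

    func mutate(w http.ResponseWriter, r *http.Request) {
    	resp := map[string]string{"status": "ok"} // placeholder admission response
    	log.Println("Ready to marshal response ...")
    	body, err := json.Marshal(resp)
    	if err != nil {
    		http.Error(w, err.Error(), http.StatusInternalServerError)
    		return
    	}
    	log.Println("Ready to write response ...")
    	w.Header().Set("Content-Type", "application/json")
    	w.Write(body)
    }

    func main() {
    	http.HandleFunc("/mutate", mutate)
    	log.Println("webhook started")
    	log.Fatal(http.ListenAndServe(":8443", nil))
    }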
	
	* 
	* ==> kernel <==
	*  10:38:40 up 21 min,  0 users,  load average: 0.36, 0.49, 0.25
	Linux addons-351207 5.15.0-1039-gcp #47~20.04.1-Ubuntu SMP Thu Jul 27 22:40:03 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [a8b0b7e51bcbe1ee73bf9fee37e07b046fd6abd7323ede20b265b363e64ddcfa] <==
	* I0821 10:36:31.044677       1 main.go:227] handling current node
	I0821 10:36:41.054807       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 10:36:41.054870       1 main.go:227] handling current node
	I0821 10:36:51.066210       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 10:36:51.066229       1 main.go:227] handling current node
	I0821 10:37:01.078430       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 10:37:01.078458       1 main.go:227] handling current node
	I0821 10:37:11.082145       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 10:37:11.082165       1 main.go:227] handling current node
	I0821 10:37:21.090462       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 10:37:21.090483       1 main.go:227] handling current node
	I0821 10:37:31.094272       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 10:37:31.094303       1 main.go:227] handling current node
	I0821 10:37:41.097840       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 10:37:41.097865       1 main.go:227] handling current node
	I0821 10:37:51.107226       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 10:37:51.107249       1 main.go:227] handling current node
	I0821 10:38:01.118500       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 10:38:01.118523       1 main.go:227] handling current node
	I0821 10:38:11.130794       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 10:38:11.130816       1 main.go:227] handling current node
	I0821 10:38:21.134599       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 10:38:21.134622       1 main.go:227] handling current node
	I0821 10:38:31.147341       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 10:38:31.147380       1 main.go:227] handling current node
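
kindnet's entries repeat roughly every ten seconds: list the node's IPs, then handle the current node. A sketch of that reconcile loop, with the interval inferred from the timestamps and the IP set mirroring the map printed above (an illustration of the pattern, not kindnet's implementation):

    // node_loop.go - a ticker-driven reconcile loop producing log pairs like
    // the kindnet section above.
    package main

    import (
    	"log"
    	"time"
    )

    func main() {
    	nodeIPs := map[string]struct{}{"192.168.49.2": {}}
    	ticker := time.NewTicker(10 * time.Second) // interval inferred from the log
    	defer ticker.Stop()
    	for range ticker.C {
    		log.Printf("Handling node with IPs: %v", nodeIPs)
    		log.Println("handling current node")
    	}
    }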
	
	* 
	* ==> kube-apiserver [29e2b1aebe3357e968461e20604ef50db93ec8e9d61ee16a64a1241b15eff62b] <==
	* I0821 10:37:10.744450       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:37:10.750731       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:37:10.750795       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:37:10.757197       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:37:10.757250       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:37:10.758176       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:37:10.758217       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:37:10.768048       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:37:10.768098       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:37:10.771967       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:37:10.772605       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:37:10.780408       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:37:10.780462       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 10:37:10.781173       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 10:37:10.781471       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0821 10:37:11.758210       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0821 10:37:11.782234       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0821 10:37:11.837626       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0821 10:37:42.849295       1 handler_proxy.go:144] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0821 10:37:42.849327       1 handler_proxy.go:100] no RequestInfo found in the context
	E0821 10:37:42.849368       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0821 10:37:42.849378       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0821 10:38:30.025322       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs=map[IPv4:10.103.194.46]
	E0821 10:38:32.172998       1 authentication.go:70] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	* 
	* ==> kube-controller-manager [1a31c61730b75649d11c090562b5c92f4f857612230fa9e0fae1e310661cb46b] <==
	* E0821 10:37:31.097472       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0821 10:37:36.461428       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0821 10:37:36.461455       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0821 10:37:37.158532       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0821 10:37:37.158570       1 shared_informer.go:318] Caches are synced for resource quota
	I0821 10:37:37.469006       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0821 10:37:37.469053       1 shared_informer.go:318] Caches are synced for garbage collector
	W0821 10:37:44.996813       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0821 10:37:44.996856       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0821 10:37:51.015534       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0821 10:37:51.015561       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0821 10:37:53.748390       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0821 10:37:53.748417       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0821 10:38:21.245337       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0821 10:38:21.245366       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0821 10:38:24.409863       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0821 10:38:24.409905       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0821 10:38:26.397778       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0821 10:38:26.397809       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0821 10:38:29.878410       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-65bdb79f98 to 1"
	I0821 10:38:29.888534       1 event.go:307] "Event occurred" object="default/hello-world-app-65bdb79f98" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-65bdb79f98-hp49p"
	I0821 10:38:32.153368       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-create
	I0821 10:38:32.157281       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-patch
	W0821 10:38:35.533116       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0821 10:38:35.533145       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [404f545a04bcd8b91d4fb8aa9720e93c65f2c9389492360af8aff93f478f142e] <==
	* I0821 10:34:41.240716       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0821 10:34:41.240930       1 server_others.go:110] "Detected node IP" address="192.168.49.2"
	I0821 10:34:41.240975       1 server_others.go:554] "Using iptables proxy"
	I0821 10:34:41.840710       1 server_others.go:192] "Using iptables Proxier"
	I0821 10:34:41.840751       1 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0821 10:34:41.840763       1 server_others.go:200] "Creating dualStackProxier for iptables"
	I0821 10:34:41.840783       1 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0821 10:34:41.840818       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0821 10:34:41.841340       1 server.go:658] "Version info" version="v1.27.4"
	I0821 10:34:41.841363       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0821 10:34:41.842669       1 config.go:188] "Starting service config controller"
	I0821 10:34:41.842689       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0821 10:34:41.842710       1 config.go:97] "Starting endpoint slice config controller"
	I0821 10:34:41.842714       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0821 10:34:41.843133       1 config.go:315] "Starting node config controller"
	I0821 10:34:41.843150       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0821 10:34:41.954974       1 shared_informer.go:318] Caches are synced for node config
	I0821 10:34:41.955023       1 shared_informer.go:318] Caches are synced for service config
	I0821 10:34:41.955044       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [70f840e974edb482cc3179696075ebd90d0d87e5a1a00a45e0c2843adf85208e] <==
	* W0821 10:34:22.038754       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0821 10:34:22.038770       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0821 10:34:22.039063       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0821 10:34:22.039124       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0821 10:34:22.039235       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0821 10:34:22.039289       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0821 10:34:22.039348       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0821 10:34:22.039385       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0821 10:34:22.039506       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0821 10:34:22.039567       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0821 10:34:22.039621       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0821 10:34:22.039678       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0821 10:34:22.039521       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0821 10:34:22.039759       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0821 10:34:22.039782       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0821 10:34:22.039767       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0821 10:34:22.039830       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0821 10:34:22.039876       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0821 10:34:22.843115       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0821 10:34:22.843143       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0821 10:34:22.872779       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0821 10:34:22.872811       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0821 10:34:23.048481       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0821 10:34:23.048514       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0821 10:34:24.862262       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Aug 21 10:38:30 addons-351207 kubelet[1561]: I0821 10:38:30.052996    1561 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/3f2e1f72-662f-4d9c-814e-776dec2606bc-gcp-creds\") pod \"hello-world-app-65bdb79f98-hp49p\" (UID: \"3f2e1f72-662f-4d9c-814e-776dec2606bc\") " pod="default/hello-world-app-65bdb79f98-hp49p"
	Aug 21 10:38:30 addons-351207 kubelet[1561]: W0821 10:38:30.299984    1561 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/9ad89cf2faa2f5edc93411156710dc311b3eb3de4b27bb438072822b7f60994c/crio-4de07d761127a5fca593161f0a18dc468e13b84416cbef0674e58ef4754b4cd4 WatchSource:0}: Error finding container 4de07d761127a5fca593161f0a18dc468e13b84416cbef0674e58ef4754b4cd4: Status 404 returned error can't find the container with id 4de07d761127a5fca593161f0a18dc468e13b84416cbef0674e58ef4754b4cd4
	Aug 21 10:38:31 addons-351207 kubelet[1561]: I0821 10:38:31.061953    1561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w64tr\" (UniqueName: \"kubernetes.io/projected/be443012-59a1-4f29-b1c9-286201a12290-kube-api-access-w64tr\") pod \"be443012-59a1-4f29-b1c9-286201a12290\" (UID: \"be443012-59a1-4f29-b1c9-286201a12290\") "
	Aug 21 10:38:31 addons-351207 kubelet[1561]: I0821 10:38:31.063618    1561 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be443012-59a1-4f29-b1c9-286201a12290-kube-api-access-w64tr" (OuterVolumeSpecName: "kube-api-access-w64tr") pod "be443012-59a1-4f29-b1c9-286201a12290" (UID: "be443012-59a1-4f29-b1c9-286201a12290"). InnerVolumeSpecName "kube-api-access-w64tr". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 21 10:38:31 addons-351207 kubelet[1561]: I0821 10:38:31.163081    1561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-w64tr\" (UniqueName: \"kubernetes.io/projected/be443012-59a1-4f29-b1c9-286201a12290-kube-api-access-w64tr\") on node \"addons-351207\" DevicePath \"\""
	Aug 21 10:38:31 addons-351207 kubelet[1561]: I0821 10:38:31.642153    1561 scope.go:115] "RemoveContainer" containerID="2b7b80bdf1931f2ee774231255c3155149a0f46c4b32b3ef454fa834310bd13c"
	Aug 21 10:38:31 addons-351207 kubelet[1561]: I0821 10:38:31.654232    1561 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/hello-world-app-65bdb79f98-hp49p" podStartSLOduration=1.780440079 podCreationTimestamp="2023-08-21 10:38:29 +0000 UTC" firstStartedPulling="2023-08-21 10:38:30.303740434 +0000 UTC m=+245.810666974" lastFinishedPulling="2023-08-21 10:38:31.177488028 +0000 UTC m=+246.684414560" observedRunningTime="2023-08-21 10:38:31.65369357 +0000 UTC m=+247.160620117" watchObservedRunningTime="2023-08-21 10:38:31.654187665 +0000 UTC m=+247.161114214"
	Aug 21 10:38:31 addons-351207 kubelet[1561]: I0821 10:38:31.660022    1561 scope.go:115] "RemoveContainer" containerID="2b7b80bdf1931f2ee774231255c3155149a0f46c4b32b3ef454fa834310bd13c"
	Aug 21 10:38:31 addons-351207 kubelet[1561]: E0821 10:38:31.660456    1561 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b7b80bdf1931f2ee774231255c3155149a0f46c4b32b3ef454fa834310bd13c\": container with ID starting with 2b7b80bdf1931f2ee774231255c3155149a0f46c4b32b3ef454fa834310bd13c not found: ID does not exist" containerID="2b7b80bdf1931f2ee774231255c3155149a0f46c4b32b3ef454fa834310bd13c"
	Aug 21 10:38:31 addons-351207 kubelet[1561]: I0821 10:38:31.660504    1561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:2b7b80bdf1931f2ee774231255c3155149a0f46c4b32b3ef454fa834310bd13c} err="failed to get container status \"2b7b80bdf1931f2ee774231255c3155149a0f46c4b32b3ef454fa834310bd13c\": rpc error: code = NotFound desc = could not find container \"2b7b80bdf1931f2ee774231255c3155149a0f46c4b32b3ef454fa834310bd13c\": container with ID starting with 2b7b80bdf1931f2ee774231255c3155149a0f46c4b32b3ef454fa834310bd13c not found: ID does not exist"
	Aug 21 10:38:32 addons-351207 kubelet[1561]: E0821 10:38:32.162923    1561 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7799c6795f-n8cq5.177d5fbfc21733fc", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7799c6795f-n8cq5", UID:"7dde345f-aa9c-4f01-88e5-385e8c84b005", APIVersion:"v1", ResourceVersion:"746", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"addons-351207"}, FirstTimestamp:time.Date(2023, time.August, 21, 10, 38, 32, 160867324, time.Local), LastTimestamp:time.Date(2023, time.August, 21, 10, 38, 32, 160867324, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7799c6795f-n8cq5.177d5fbfc21733fc" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 21 10:38:32 addons-351207 kubelet[1561]: I0821 10:38:32.583012    1561 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=1747aef9-ae0d-402b-b25f-367a1df531b0 path="/var/lib/kubelet/pods/1747aef9-ae0d-402b-b25f-367a1df531b0/volumes"
	Aug 21 10:38:32 addons-351207 kubelet[1561]: I0821 10:38:32.583315    1561 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=2f0c9eff-c6b8-40c2-b2bf-a67bacbb4617 path="/var/lib/kubelet/pods/2f0c9eff-c6b8-40c2-b2bf-a67bacbb4617/volumes"
	Aug 21 10:38:32 addons-351207 kubelet[1561]: I0821 10:38:32.583634    1561 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=be443012-59a1-4f29-b1c9-286201a12290 path="/var/lib/kubelet/pods/be443012-59a1-4f29-b1c9-286201a12290/volumes"
	Aug 21 10:38:33 addons-351207 kubelet[1561]: I0821 10:38:33.477436    1561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj5rk\" (UniqueName: \"kubernetes.io/projected/7dde345f-aa9c-4f01-88e5-385e8c84b005-kube-api-access-wj5rk\") pod \"7dde345f-aa9c-4f01-88e5-385e8c84b005\" (UID: \"7dde345f-aa9c-4f01-88e5-385e8c84b005\") "
	Aug 21 10:38:33 addons-351207 kubelet[1561]: I0821 10:38:33.477486    1561 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7dde345f-aa9c-4f01-88e5-385e8c84b005-webhook-cert\") pod \"7dde345f-aa9c-4f01-88e5-385e8c84b005\" (UID: \"7dde345f-aa9c-4f01-88e5-385e8c84b005\") "
	Aug 21 10:38:33 addons-351207 kubelet[1561]: I0821 10:38:33.479165    1561 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dde345f-aa9c-4f01-88e5-385e8c84b005-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "7dde345f-aa9c-4f01-88e5-385e8c84b005" (UID: "7dde345f-aa9c-4f01-88e5-385e8c84b005"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 21 10:38:33 addons-351207 kubelet[1561]: I0821 10:38:33.479681    1561 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dde345f-aa9c-4f01-88e5-385e8c84b005-kube-api-access-wj5rk" (OuterVolumeSpecName: "kube-api-access-wj5rk") pod "7dde345f-aa9c-4f01-88e5-385e8c84b005" (UID: "7dde345f-aa9c-4f01-88e5-385e8c84b005"). InnerVolumeSpecName "kube-api-access-wj5rk". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 21 10:38:33 addons-351207 kubelet[1561]: I0821 10:38:33.578346    1561 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-wj5rk\" (UniqueName: \"kubernetes.io/projected/7dde345f-aa9c-4f01-88e5-385e8c84b005-kube-api-access-wj5rk\") on node \"addons-351207\" DevicePath \"\""
	Aug 21 10:38:33 addons-351207 kubelet[1561]: I0821 10:38:33.578383    1561 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7dde345f-aa9c-4f01-88e5-385e8c84b005-webhook-cert\") on node \"addons-351207\" DevicePath \"\""
	Aug 21 10:38:33 addons-351207 kubelet[1561]: I0821 10:38:33.649329    1561 scope.go:115] "RemoveContainer" containerID="e6a98262d7a9c42db7f7cb0b495423725ca72cd66e7888482d47be4ed4e9b727"
	Aug 21 10:38:33 addons-351207 kubelet[1561]: I0821 10:38:33.664067    1561 scope.go:115] "RemoveContainer" containerID="e6a98262d7a9c42db7f7cb0b495423725ca72cd66e7888482d47be4ed4e9b727"
	Aug 21 10:38:33 addons-351207 kubelet[1561]: E0821 10:38:33.664442    1561 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6a98262d7a9c42db7f7cb0b495423725ca72cd66e7888482d47be4ed4e9b727\": container with ID starting with e6a98262d7a9c42db7f7cb0b495423725ca72cd66e7888482d47be4ed4e9b727 not found: ID does not exist" containerID="e6a98262d7a9c42db7f7cb0b495423725ca72cd66e7888482d47be4ed4e9b727"
	Aug 21 10:38:33 addons-351207 kubelet[1561]: I0821 10:38:33.664482    1561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:e6a98262d7a9c42db7f7cb0b495423725ca72cd66e7888482d47be4ed4e9b727} err="failed to get container status \"e6a98262d7a9c42db7f7cb0b495423725ca72cd66e7888482d47be4ed4e9b727\": rpc error: code = NotFound desc = could not find container \"e6a98262d7a9c42db7f7cb0b495423725ca72cd66e7888482d47be4ed4e9b727\": container with ID starting with e6a98262d7a9c42db7f7cb0b495423725ca72cd66e7888482d47be4ed4e9b727 not found: ID does not exist"
	Aug 21 10:38:34 addons-351207 kubelet[1561]: I0821 10:38:34.583099    1561 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=7dde345f-aa9c-4f01-88e5-385e8c84b005 path="/var/lib/kubelet/pods/7dde345f-aa9c-4f01-88e5-385e8c84b005/volumes"
	
	* 
	* ==> storage-provisioner [684bc2aebf91ef52da385a760acfa3bb52aac826b4c4f2171a3e1ddbdc7f0bee] <==
	* I0821 10:35:11.958947       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0821 10:35:11.967126       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0821 10:35:11.967169       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0821 10:35:11.973697       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0821 10:35:11.973849       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-351207_7e3d720d-34f2-419c-b36c-f492f67b8414!
	I0821 10:35:11.973865       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1a71d2c3-16c2-4e85-8b87-33e0ab919db6", APIVersion:"v1", ResourceVersion:"830", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-351207_7e3d720d-34f2-419c-b36c-f492f67b8414 became leader
	I0821 10:35:12.074033       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-351207_7e3d720d-34f2-419c-b36c-f492f67b8414!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-351207 -n addons-351207
helpers_test.go:261: (dbg) Run:  kubectl --context addons-351207 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (152.30s)
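The step that fails above is the in-node curl against the ingress with a spoofed Host header; ssh status 28 is curl's "operation timed out" exit code passed through. Below is a minimal Go sketch of the same check, for illustration only: it assumes the ingress is reachable at 127.0.0.1 from where the code runs, whereas the real check in addons_test.go shells out to curl via minikube ssh.

	// ingresscheck.go - a sketch (not the test's actual code) of the request
	// the failing step performs: an HTTP GET with the Host header set to the
	// Ingress rule's host, so routing happens by virtual host.
	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		const targetURL = "http://127.0.0.1/" // inside the minikube node
		client := &http.Client{Timeout: 10 * time.Second}

		req, err := http.NewRequest(http.MethodGet, targetURL, nil)
		if err != nil {
			panic(err)
		}
		// The Host header must match the Ingress rule even though we
		// dial 127.0.0.1 directly.
		req.Host = "nginx.example.com"

		resp, err := client.Do(req)
		if err != nil {
			fmt.Println("request failed (the test saw a timeout here):", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.Status, len(body), "bytes")
	}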

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (11.82s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 image load --daemon gcr.io/google-containers/addon-resizer:functional-923429 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-923429 image load --daemon gcr.io/google-containers/addon-resizer:functional-923429 --alsologtostderr: (9.609631722s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 image ls
functional_test.go:447: (dbg) Done: out/minikube-linux-amd64 -p functional-923429 image ls: (2.209095412s)
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-923429" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (11.82s)
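The failure above is the post-load verification: the test loads the tag into the runtime with `image load --daemon` and then lists images to confirm it is present. A minimal sketch of that verification follows, assuming the binary path, profile name, and image tag shown in the log; it is hypothetical helper code, not the suite's own.

	// imagecheck.go - a sketch of the check that failed: run `minikube image
	// ls` for the profile and look for the expected tag in the output.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		const (
			bin     = "out/minikube-linux-amd64" // path used by this test run
			profile = "functional-923429"
			image   = "gcr.io/google-containers/addon-resizer:functional-923429"
		)

		out, err := exec.Command(bin, "-p", profile, "image", "ls").CombinedOutput()
		if err != nil {
			fmt.Println("image ls failed:", err)
			return
		}
		if strings.Contains(string(out), image) {
			fmt.Println("image is present in the runtime")
		} else {
			fmt.Println("image not found - the condition the test reports")
		}
	}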

TestIngressAddonLegacy/serial/ValidateIngressAddons (184.33s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-218089 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-218089 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (17.223585595s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-218089 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-218089 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [1b19c02d-8a06-45ac-85ee-41d99001bf29] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [1b19c02d-8a06-45ac-85ee-41d99001bf29] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.007149436s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-218089 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0821 10:45:53.215874   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/client.crt: no such file or directory
E0821 10:46:20.900257   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/client.crt: no such file or directory
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-218089 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.478026626s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-218089 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-218089 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
E0821 10:47:20.792372   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/functional-923429/client.crt: no such file or directory
E0821 10:47:20.797679   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/functional-923429/client.crt: no such file or directory
E0821 10:47:20.807895   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/functional-923429/client.crt: no such file or directory
E0821 10:47:20.828180   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/functional-923429/client.crt: no such file or directory
E0821 10:47:20.868459   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/functional-923429/client.crt: no such file or directory
E0821 10:47:20.948811   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/functional-923429/client.crt: no such file or directory
E0821 10:47:21.109214   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/functional-923429/client.crt: no such file or directory
E0821 10:47:21.429815   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/functional-923429/client.crt: no such file or directory
E0821 10:47:22.070681   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/functional-923429/client.crt: no such file or directory
E0821 10:47:23.350905   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/functional-923429/client.crt: no such file or directory
E0821 10:47:25.912718   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/functional-923429/client.crt: no such file or directory
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.01126037s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
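The nslookup above queries the ingress-dns addon directly at the node IP and times out. The sketch below shows the equivalent lookup in Go with a resolver pinned to that server; the name and server address are taken from the log, and the code is illustrative rather than part of the suite.

	// dnscheck.go - resolve a name against a specific DNS server (the
	// ingress-dns addon on the node IP) instead of the system resolver.
	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		server := "192.168.49.2:53"
		r := &net.Resolver{
			PreferGo: true,
			// Force every query to the addon's DNS server.
			Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
				d := net.Dialer{Timeout: 5 * time.Second}
				return d.DialContext(ctx, network, server)
			},
		}

		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()

		addrs, err := r.LookupHost(ctx, "hello-john.test")
		if err != nil {
			// A timeout here matches the ";; connection timed out" above.
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println("resolved to:", addrs)
	}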
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-218089 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-218089 addons disable ingress-dns --alsologtostderr -v=1: (1.710071561s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-218089 addons disable ingress --alsologtostderr -v=1
E0821 10:47:31.033413   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/functional-923429/client.crt: no such file or directory
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-218089 addons disable ingress --alsologtostderr -v=1: (7.386935955s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-218089
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-218089:

-- stdout --
	[
	    {
	        "Id": "13b46a8946be8c48fa2eff2b81309ff44c3c400a215536d3c1ff6bcb5a8b98da",
	        "Created": "2023-08-21T10:43:27.584849365Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 52719,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-08-21T10:43:27.860451256Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6cc01e6091959400f260dc442708e7c71630b58dab1f7c344cb00926bd84950",
	        "ResolvConfPath": "/var/lib/docker/containers/13b46a8946be8c48fa2eff2b81309ff44c3c400a215536d3c1ff6bcb5a8b98da/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/13b46a8946be8c48fa2eff2b81309ff44c3c400a215536d3c1ff6bcb5a8b98da/hostname",
	        "HostsPath": "/var/lib/docker/containers/13b46a8946be8c48fa2eff2b81309ff44c3c400a215536d3c1ff6bcb5a8b98da/hosts",
	        "LogPath": "/var/lib/docker/containers/13b46a8946be8c48fa2eff2b81309ff44c3c400a215536d3c1ff6bcb5a8b98da/13b46a8946be8c48fa2eff2b81309ff44c3c400a215536d3c1ff6bcb5a8b98da-json.log",
	        "Name": "/ingress-addon-legacy-218089",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-218089:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ingress-addon-legacy-218089",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/63905fec0c741b44e3c98f5e435194e182c423729eebb96849cd61296daecab9-init/diff:/var/lib/docker/overlay2/524bb0f129210e266d288d085768bab72d4735717d72ebbb4611a7bc558cb4ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/63905fec0c741b44e3c98f5e435194e182c423729eebb96849cd61296daecab9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/63905fec0c741b44e3c98f5e435194e182c423729eebb96849cd61296daecab9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/63905fec0c741b44e3c98f5e435194e182c423729eebb96849cd61296daecab9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-218089",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-218089/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-218089",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-218089",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-218089",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f40110baea74bd34e42b267e850acb3857158ce7602944482b808d364ac15af8",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f40110baea74",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-218089": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "13b46a8946be",
	                        "ingress-addon-legacy-218089"
	                    ],
	                    "NetworkID": "441cbe7903333ff4037bc12121157a437a7c1eb16dfd52241e98bc8da641ec0d",
	                    "EndpointID": "f9cd8bbf536e008756633f42a337c6bd56cccd10628ce2f78fd40c38f8e34504",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
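The inspect output above shows the node container's ports (22, 2376, 5000, 8443, 32443) published on 127.0.0.1 with ephemeral host ports. A short sketch of extracting that mapping programmatically from `docker inspect` JSON follows; the container name is taken from the log, and only the fields needed for NetworkSettings.Ports are modeled.

	// portmap.go - decode the host port bindings shown above from
	// `docker inspect` output (which is a JSON array of containers).
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type inspect struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "ingress-addon-legacy-218089").Output()
		if err != nil {
			fmt.Println("docker inspect failed:", err)
			return
		}
		var containers []inspect
		if err := json.Unmarshal(out, &containers); err != nil {
			fmt.Println("decode failed:", err)
			return
		}
		if len(containers) == 0 {
			return
		}
		for port, bindings := range containers[0].NetworkSettings.Ports {
			for _, b := range bindings {
				fmt.Printf("%s -> %s:%s\n", port, b.HostIp, b.HostPort)
			}
		}
	}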
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-218089 -n ingress-addon-legacy-218089
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-218089 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-218089 logs -n 25: (1.013027815s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                 Args                 |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| addons         | functional-923429 addons list        | functional-923429           | jenkins | v1.31.2 | 21 Aug 23 10:42 UTC | 21 Aug 23 10:42 UTC |
	|                | -o json                              |                             |         |         |                     |                     |
	| update-context | functional-923429                    | functional-923429           | jenkins | v1.31.2 | 21 Aug 23 10:42 UTC | 21 Aug 23 10:42 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| update-context | functional-923429                    | functional-923429           | jenkins | v1.31.2 | 21 Aug 23 10:42 UTC | 21 Aug 23 10:42 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| update-context | functional-923429                    | functional-923429           | jenkins | v1.31.2 | 21 Aug 23 10:42 UTC | 21 Aug 23 10:42 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| image          | functional-923429                    | functional-923429           | jenkins | v1.31.2 | 21 Aug 23 10:42 UTC | 21 Aug 23 10:42 UTC |
	|                | image ls --format short              |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| image          | functional-923429                    | functional-923429           | jenkins | v1.31.2 | 21 Aug 23 10:42 UTC | 21 Aug 23 10:42 UTC |
	|                | image ls --format yaml               |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| ssh            | functional-923429 ssh pgrep          | functional-923429           | jenkins | v1.31.2 | 21 Aug 23 10:42 UTC |                     |
	|                | buildkitd                            |                             |         |         |                     |                     |
	| image          | functional-923429 image build -t     | functional-923429           | jenkins | v1.31.2 | 21 Aug 23 10:42 UTC | 21 Aug 23 10:42 UTC |
	|                | localhost/my-image:functional-923429 |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr     |                             |         |         |                     |                     |
	| service        | functional-923429 service list       | functional-923429           | jenkins | v1.31.2 | 21 Aug 23 10:42 UTC | 21 Aug 23 10:42 UTC |
	| image          | functional-923429 image ls           | functional-923429           | jenkins | v1.31.2 | 21 Aug 23 10:42 UTC | 21 Aug 23 10:42 UTC |
	| service        | functional-923429 service            | functional-923429           | jenkins | v1.31.2 | 21 Aug 23 10:42 UTC | 21 Aug 23 10:42 UTC |
	|                | hello-node-connect --url             |                             |         |         |                     |                     |
	| image          | functional-923429                    | functional-923429           | jenkins | v1.31.2 | 21 Aug 23 10:42 UTC | 21 Aug 23 10:42 UTC |
	|                | image ls --format json               |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| image          | functional-923429                    | functional-923429           | jenkins | v1.31.2 | 21 Aug 23 10:42 UTC | 21 Aug 23 10:42 UTC |
	|                | image ls --format table              |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| service        | functional-923429 service list       | functional-923429           | jenkins | v1.31.2 | 21 Aug 23 10:42 UTC | 21 Aug 23 10:43 UTC |
	|                | -o json                              |                             |         |         |                     |                     |
	| service        | functional-923429 service            | functional-923429           | jenkins | v1.31.2 | 21 Aug 23 10:43 UTC | 21 Aug 23 10:43 UTC |
	|                | --namespace=default --https          |                             |         |         |                     |                     |
	|                | --url hello-node                     |                             |         |         |                     |                     |
	| service        | functional-923429                    | functional-923429           | jenkins | v1.31.2 | 21 Aug 23 10:43 UTC | 21 Aug 23 10:43 UTC |
	|                | service hello-node --url             |                             |         |         |                     |                     |
	|                | --format={{.IP}}                     |                             |         |         |                     |                     |
	| service        | functional-923429 service            | functional-923429           | jenkins | v1.31.2 | 21 Aug 23 10:43 UTC | 21 Aug 23 10:43 UTC |
	|                | hello-node --url                     |                             |         |         |                     |                     |
	| delete         | -p functional-923429                 | functional-923429           | jenkins | v1.31.2 | 21 Aug 23 10:43 UTC | 21 Aug 23 10:43 UTC |
	| start          | -p ingress-addon-legacy-218089       | ingress-addon-legacy-218089 | jenkins | v1.31.2 | 21 Aug 23 10:43 UTC | 21 Aug 23 10:44 UTC |
	|                | --kubernetes-version=v1.18.20        |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true            |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                 |                             |         |         |                     |                     |
	|                | --container-runtime=crio             |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-218089          | ingress-addon-legacy-218089 | jenkins | v1.31.2 | 21 Aug 23 10:44 UTC | 21 Aug 23 10:44 UTC |
	|                | addons enable ingress                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-218089          | ingress-addon-legacy-218089 | jenkins | v1.31.2 | 21 Aug 23 10:44 UTC | 21 Aug 23 10:44 UTC |
	|                | addons enable ingress-dns            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5               |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-218089          | ingress-addon-legacy-218089 | jenkins | v1.31.2 | 21 Aug 23 10:45 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/        |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'         |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-218089 ip       | ingress-addon-legacy-218089 | jenkins | v1.31.2 | 21 Aug 23 10:47 UTC | 21 Aug 23 10:47 UTC |
	| addons         | ingress-addon-legacy-218089          | ingress-addon-legacy-218089 | jenkins | v1.31.2 | 21 Aug 23 10:47 UTC | 21 Aug 23 10:47 UTC |
	|                | addons disable ingress-dns           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-218089          | ingress-addon-legacy-218089 | jenkins | v1.31.2 | 21 Aug 23 10:47 UTC | 21 Aug 23 10:47 UTC |
	|                | addons disable ingress               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                             |         |         |                     |                     |
	|----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/21 10:43:14
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0821 10:43:14.520861   52103 out.go:296] Setting OutFile to fd 1 ...
	I0821 10:43:14.521008   52103 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 10:43:14.521021   52103 out.go:309] Setting ErrFile to fd 2...
	I0821 10:43:14.521029   52103 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 10:43:14.521242   52103 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17102-5717/.minikube/bin
	I0821 10:43:14.521855   52103 out.go:303] Setting JSON to false
	I0821 10:43:14.522994   52103 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1544,"bootTime":1692613050,"procs":369,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0821 10:43:14.523063   52103 start.go:138] virtualization: kvm guest
	I0821 10:43:14.526138   52103 out.go:177] * [ingress-addon-legacy-218089] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0821 10:43:14.527538   52103 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 10:43:14.528996   52103 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 10:43:14.527598   52103 notify.go:220] Checking for updates...
	I0821 10:43:14.530777   52103 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17102-5717/kubeconfig
	I0821 10:43:14.532252   52103 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17102-5717/.minikube
	I0821 10:43:14.533860   52103 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0821 10:43:14.535253   52103 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 10:43:14.536782   52103 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 10:43:14.557273   52103 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0821 10:43:14.557346   52103 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0821 10:43:14.609521   52103 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:37 SystemTime:2023-08-21 10:43:14.600448383 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0821 10:43:14.609621   52103 docker.go:294] overlay module found
	I0821 10:43:14.611675   52103 out.go:177] * Using the docker driver based on user configuration
	I0821 10:43:14.613062   52103 start.go:298] selected driver: docker
	I0821 10:43:14.613075   52103 start.go:902] validating driver "docker" against <nil>
	I0821 10:43:14.613088   52103 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 10:43:14.613842   52103 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0821 10:43:14.664270   52103 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:37 SystemTime:2023-08-21 10:43:14.655766291 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0821 10:43:14.664440   52103 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0821 10:43:14.664624   52103 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0821 10:43:14.666684   52103 out.go:177] * Using Docker driver with root privileges
	I0821 10:43:14.668230   52103 cni.go:84] Creating CNI manager for ""
	I0821 10:43:14.668254   52103 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0821 10:43:14.668265   52103 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0821 10:43:14.668278   52103 start_flags.go:319] config:
	{Name:ingress-addon-legacy-218089 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-218089 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 10:43:14.669912   52103 out.go:177] * Starting control plane node ingress-addon-legacy-218089 in cluster ingress-addon-legacy-218089
	I0821 10:43:14.671436   52103 cache.go:122] Beginning downloading kic base image for docker with crio
	I0821 10:43:14.672933   52103 out.go:177] * Pulling base image ...
	I0821 10:43:14.674305   52103 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0821 10:43:14.674334   52103 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0821 10:43:14.690387   52103 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0821 10:43:14.690415   52103 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0821 10:43:14.698383   52103 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0821 10:43:14.698407   52103 cache.go:57] Caching tarball of preloaded images
	I0821 10:43:14.698557   52103 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0821 10:43:14.700471   52103 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0821 10:43:14.701934   52103 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0821 10:43:14.732194   52103 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17102-5717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0821 10:43:19.395757   52103 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0821 10:43:19.395847   52103 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17102-5717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0821 10:43:20.343452   52103 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I0821 10:43:20.343780   52103 profile.go:148] Saving config to /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/config.json ...
	I0821 10:43:20.343807   52103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/config.json: {Name:mk5de9070734ffd304b51bfbfbf35d28070cf738 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 10:43:20.343992   52103 cache.go:195] Successfully downloaded all kic artifacts
	I0821 10:43:20.344017   52103 start.go:365] acquiring machines lock for ingress-addon-legacy-218089: {Name:mk578d0cb0fd2e6b97999458304dc9d7bb309b14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 10:43:20.344057   52103 start.go:369] acquired machines lock for "ingress-addon-legacy-218089" in 30.696µs
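
The lock spec logged above ({... Delay:500ms Timeout:10m0s ...}) implies a simple poll-until-deadline acquisition loop. A minimal sketch of that pattern, assuming nothing about minikube's actual lock code beyond what the log shows:

    package main

    import (
    	"fmt"
    	"time"
    )

    // acquire retries try() every delay until it succeeds or timeout elapses.
    func acquire(try func() bool, delay, timeout time.Duration) bool {
    	deadline := time.Now().Add(timeout)
    	for {
    		if try() {
    			return true
    		}
    		if time.Now().After(deadline) {
    			return false
    		}
    		time.Sleep(delay)
    	}
    }

    func main() {
    	free := true // pretend the machines lock is uncontended, as in this run
    	fmt.Println(acquire(func() bool { return free }, 500*time.Millisecond, 10*time.Minute))
    }

An uncontended lock returns on the first try, which is why the log reports the machines lock acquired in microseconds.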
	I0821 10:43:20.344076   52103 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-218089 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-218089 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0821 10:43:20.344143   52103 start.go:125] createHost starting for "" (driver="docker")
	I0821 10:43:20.346157   52103 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0821 10:43:20.346385   52103 start.go:159] libmachine.API.Create for "ingress-addon-legacy-218089" (driver="docker")
	I0821 10:43:20.346414   52103 client.go:168] LocalClient.Create starting
	I0821 10:43:20.346486   52103 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca.pem
	I0821 10:43:20.346518   52103 main.go:141] libmachine: Decoding PEM data...
	I0821 10:43:20.346536   52103 main.go:141] libmachine: Parsing certificate...
	I0821 10:43:20.346585   52103 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17102-5717/.minikube/certs/cert.pem
	I0821 10:43:20.346606   52103 main.go:141] libmachine: Decoding PEM data...
	I0821 10:43:20.346614   52103 main.go:141] libmachine: Parsing certificate...
	I0821 10:43:20.346887   52103 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-218089 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0821 10:43:20.362762   52103 cli_runner.go:211] docker network inspect ingress-addon-legacy-218089 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0821 10:43:20.362835   52103 network_create.go:281] running [docker network inspect ingress-addon-legacy-218089] to gather additional debugging logs...
	I0821 10:43:20.362854   52103 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-218089
	W0821 10:43:20.379386   52103 cli_runner.go:211] docker network inspect ingress-addon-legacy-218089 returned with exit code 1
	I0821 10:43:20.379419   52103 network_create.go:284] error running [docker network inspect ingress-addon-legacy-218089]: docker network inspect ingress-addon-legacy-218089: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-218089 not found
	I0821 10:43:20.379434   52103 network_create.go:286] output of [docker network inspect ingress-addon-legacy-218089]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-218089 not found
	
	** /stderr **
	I0821 10:43:20.379485   52103 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0821 10:43:20.395034   52103 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0012c6310}
	I0821 10:43:20.395066   52103 network_create.go:123] attempt to create docker network ingress-addon-legacy-218089 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0821 10:43:20.395118   52103 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-218089 ingress-addon-legacy-218089
	I0821 10:43:20.443809   52103 network_create.go:107] docker network ingress-addon-legacy-218089 192.168.49.0/24 created
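
The two steps above (pick the first free private /24, then create a bridge network pinned to that subnet and gateway) are the standard kic network bootstrap. A rough sketch of the subnet scan, with the candidate list and stride treated as illustrative assumptions since the log only shows the first hit:

    package main

    import "fmt"

    // firstFree returns the first candidate /24 not already used by an
    // existing docker network. The base (192.168.49.0) matches the log;
    // the upper bound and step are illustrative, not minikube's exact values.
    func firstFree(taken map[string]bool) (string, bool) {
    	for third := 49; third <= 103; third += 9 {
    		cidr := fmt.Sprintf("192.168.%d.0/24", third)
    		if !taken[cidr] {
    			return cidr, true
    		}
    	}
    	return "", false
    }

    func main() {
    	cidr, ok := firstFree(map[string]bool{}) // no minikube networks exist yet
    	fmt.Println(cidr, ok)                    // 192.168.49.0/24 true
    }

With the subnet fixed, the node container can later be given the static IP 192.168.49.2 that the rest of this log depends on.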
	I0821 10:43:20.443839   52103 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-218089" container
	I0821 10:43:20.443895   52103 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0821 10:43:20.458796   52103 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-218089 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-218089 --label created_by.minikube.sigs.k8s.io=true
	I0821 10:43:20.475126   52103 oci.go:103] Successfully created a docker volume ingress-addon-legacy-218089
	I0821 10:43:20.475217   52103 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-218089-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-218089 --entrypoint /usr/bin/test -v ingress-addon-legacy-218089:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0821 10:43:22.235399   52103 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-218089-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-218089 --entrypoint /usr/bin/test -v ingress-addon-legacy-218089:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib: (1.760089472s)
	I0821 10:43:22.235429   52103 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-218089
	I0821 10:43:22.235546   52103 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0821 10:43:22.235578   52103 kic.go:190] Starting extracting preloaded images to volume ...
	I0821 10:43:22.235649   52103 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17102-5717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-218089:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0821 10:43:27.521536   52103 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17102-5717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-218089:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (5.285843735s)
	I0821 10:43:27.521571   52103 kic.go:199] duration metric: took 5.285990 seconds to extract preloaded images to volume
	W0821 10:43:27.521703   52103 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0821 10:43:27.521805   52103 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0821 10:43:27.570121   52103 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-218089 --name ingress-addon-legacy-218089 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-218089 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-218089 --network ingress-addon-legacy-218089 --ip 192.168.49.2 --volume ingress-addon-legacy-218089:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0821 10:43:27.868389   52103 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-218089 --format={{.State.Running}}
	I0821 10:43:27.885087   52103 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-218089 --format={{.State.Status}}
	I0821 10:43:27.902367   52103 cli_runner.go:164] Run: docker exec ingress-addon-legacy-218089 stat /var/lib/dpkg/alternatives/iptables
	I0821 10:43:27.963114   52103 oci.go:144] the created container "ingress-addon-legacy-218089" has a running status.
	I0821 10:43:27.963142   52103 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17102-5717/.minikube/machines/ingress-addon-legacy-218089/id_rsa...
	I0821 10:43:28.122823   52103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/machines/ingress-addon-legacy-218089/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0821 10:43:28.122865   52103 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17102-5717/.minikube/machines/ingress-addon-legacy-218089/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0821 10:43:28.143246   52103 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-218089 --format={{.State.Status}}
	I0821 10:43:28.158333   52103 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0821 10:43:28.158353   52103 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-218089 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0821 10:43:28.229538   52103 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-218089 --format={{.State.Status}}
	I0821 10:43:28.248367   52103 machine.go:88] provisioning docker machine ...
	I0821 10:43:28.248409   52103 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-218089"
	I0821 10:43:28.248464   52103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-218089
	I0821 10:43:28.273050   52103 main.go:141] libmachine: Using SSH client type: native
	I0821 10:43:28.273559   52103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0821 10:43:28.273582   52103 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-218089 && echo "ingress-addon-legacy-218089" | sudo tee /etc/hostname
	I0821 10:43:28.274291   52103 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0821 10:43:31.409096   52103 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-218089
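
The -f argument on these container inspect calls is a Go text/template that digs the randomly published host port for 22/tcp out of the container's NetworkSettings. A self-contained demo of the same index chain against stand-in data (the map below is illustrative, not real docker output):

    package main

    import (
    	"os"
    	"text/template"
    )

    func main() {
    	// Stand-in for .NetworkSettings.Ports as docker would report it.
    	ports := map[string][]map[string]string{
    		"22/tcp": {{"HostIp": "127.0.0.1", "HostPort": "32787"}},
    	}
    	t := template.Must(template.New("port").Parse(
    		`{{(index (index . "22/tcp") 0).HostPort}}`))
    	t.Execute(os.Stdout, ports) // prints: 32787
    }

That resolved port (32787 in this run) is what the SSH client dials on 127.0.0.1 in the lines above and below.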
	
	I0821 10:43:31.409193   52103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-218089
	I0821 10:43:31.425420   52103 main.go:141] libmachine: Using SSH client type: native
	I0821 10:43:31.426014   52103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0821 10:43:31.426045   52103 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-218089' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-218089/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-218089' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0821 10:43:31.547065   52103 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0821 10:43:31.547095   52103 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17102-5717/.minikube CaCertPath:/home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17102-5717/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17102-5717/.minikube}
	I0821 10:43:31.547121   52103 ubuntu.go:177] setting up certificates
	I0821 10:43:31.547131   52103 provision.go:83] configureAuth start
	I0821 10:43:31.547189   52103 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-218089
	I0821 10:43:31.563326   52103 provision.go:138] copyHostCerts
	I0821 10:43:31.563376   52103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17102-5717/.minikube/ca.pem
	I0821 10:43:31.563410   52103 exec_runner.go:144] found /home/jenkins/minikube-integration/17102-5717/.minikube/ca.pem, removing ...
	I0821 10:43:31.563422   52103 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17102-5717/.minikube/ca.pem
	I0821 10:43:31.563515   52103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17102-5717/.minikube/ca.pem (1078 bytes)
	I0821 10:43:31.563603   52103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17102-5717/.minikube/cert.pem
	I0821 10:43:31.563629   52103 exec_runner.go:144] found /home/jenkins/minikube-integration/17102-5717/.minikube/cert.pem, removing ...
	I0821 10:43:31.563636   52103 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17102-5717/.minikube/cert.pem
	I0821 10:43:31.563679   52103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17102-5717/.minikube/cert.pem (1123 bytes)
	I0821 10:43:31.563740   52103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17102-5717/.minikube/key.pem
	I0821 10:43:31.563762   52103 exec_runner.go:144] found /home/jenkins/minikube-integration/17102-5717/.minikube/key.pem, removing ...
	I0821 10:43:31.563771   52103 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17102-5717/.minikube/key.pem
	I0821 10:43:31.563803   52103 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17102-5717/.minikube/key.pem (1675 bytes)
	I0821 10:43:31.563869   52103 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17102-5717/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-218089 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-218089]
	I0821 10:43:31.778573   52103 provision.go:172] copyRemoteCerts
	I0821 10:43:31.778652   52103 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0821 10:43:31.778700   52103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-218089
	I0821 10:43:31.794508   52103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/ingress-addon-legacy-218089/id_rsa Username:docker}
	I0821 10:43:31.883261   52103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0821 10:43:31.883334   52103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0821 10:43:31.903244   52103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0821 10:43:31.903304   52103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0821 10:43:31.922607   52103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0821 10:43:31.922671   52103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0821 10:43:31.941727   52103 provision.go:86] duration metric: configureAuth took 394.582996ms
	I0821 10:43:31.941754   52103 ubuntu.go:193] setting minikube options for container-runtime
	I0821 10:43:31.941914   52103 config.go:182] Loaded profile config "ingress-addon-legacy-218089": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0821 10:43:31.942021   52103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-218089
	I0821 10:43:31.957651   52103 main.go:141] libmachine: Using SSH client type: native
	I0821 10:43:31.958076   52103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0821 10:43:31.958097   52103 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0821 10:43:32.187209   52103 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0821 10:43:32.187238   52103 machine.go:91] provisioned docker machine in 3.938843746s
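
The %!s(MISSING) in the printf command a few lines up (and in the stat command later in this log) is Go's fmt package flagging a %s verb that reached a formatting pass with no matching operand; whether the mangled string also reached the shell is not visible from the log. The marker is easy to reproduce:

    package main

    import "fmt"

    func main() {
    	// One %s verb, zero operands: fmt substitutes %!s(MISSING).
    	s := fmt.Sprintf("sudo mkdir -p /etc/sysconfig && printf %s ...")
    	fmt.Println(s) // sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) ...
    }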
	I0821 10:43:32.187246   52103 client.go:171] LocalClient.Create took 11.840824776s
	I0821 10:43:32.187263   52103 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-218089" took 11.840876619s
	I0821 10:43:32.187270   52103 start.go:300] post-start starting for "ingress-addon-legacy-218089" (driver="docker")
	I0821 10:43:32.187278   52103 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0821 10:43:32.187332   52103 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0821 10:43:32.187387   52103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-218089
	I0821 10:43:32.203587   52103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/ingress-addon-legacy-218089/id_rsa Username:docker}
	I0821 10:43:32.295484   52103 ssh_runner.go:195] Run: cat /etc/os-release
	I0821 10:43:32.298276   52103 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0821 10:43:32.298307   52103 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0821 10:43:32.298324   52103 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0821 10:43:32.298335   52103 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0821 10:43:32.298348   52103 filesync.go:126] Scanning /home/jenkins/minikube-integration/17102-5717/.minikube/addons for local assets ...
	I0821 10:43:32.298410   52103 filesync.go:126] Scanning /home/jenkins/minikube-integration/17102-5717/.minikube/files for local assets ...
	I0821 10:43:32.298502   52103 filesync.go:149] local asset: /home/jenkins/minikube-integration/17102-5717/.minikube/files/etc/ssl/certs/124602.pem -> 124602.pem in /etc/ssl/certs
	I0821 10:43:32.298512   52103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/files/etc/ssl/certs/124602.pem -> /etc/ssl/certs/124602.pem
	I0821 10:43:32.298618   52103 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0821 10:43:32.305694   52103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/files/etc/ssl/certs/124602.pem --> /etc/ssl/certs/124602.pem (1708 bytes)
	I0821 10:43:32.324993   52103 start.go:303] post-start completed in 137.712046ms
	I0821 10:43:32.325332   52103 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-218089
	I0821 10:43:32.340934   52103 profile.go:148] Saving config to /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/config.json ...
	I0821 10:43:32.341170   52103 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0821 10:43:32.341215   52103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-218089
	I0821 10:43:32.356559   52103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/ingress-addon-legacy-218089/id_rsa Username:docker}
	I0821 10:43:32.443940   52103 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0821 10:43:32.447731   52103 start.go:128] duration metric: createHost completed in 12.103579097s
	I0821 10:43:32.447747   52103 start.go:83] releasing machines lock for "ingress-addon-legacy-218089", held for 12.103680034s
	I0821 10:43:32.447797   52103 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-218089
	I0821 10:43:32.463451   52103 ssh_runner.go:195] Run: cat /version.json
	I0821 10:43:32.463510   52103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-218089
	I0821 10:43:32.463529   52103 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0821 10:43:32.463601   52103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-218089
	I0821 10:43:32.481788   52103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/ingress-addon-legacy-218089/id_rsa Username:docker}
	I0821 10:43:32.481977   52103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/ingress-addon-legacy-218089/id_rsa Username:docker}
	I0821 10:43:32.650905   52103 ssh_runner.go:195] Run: systemctl --version
	I0821 10:43:32.654853   52103 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0821 10:43:32.789241   52103 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0821 10:43:32.793237   52103 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0821 10:43:32.810193   52103 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0821 10:43:32.810268   52103 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0821 10:43:32.835849   52103 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0821 10:43:32.835875   52103 start.go:466] detecting cgroup driver to use...
	I0821 10:43:32.835909   52103 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0821 10:43:32.835967   52103 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0821 10:43:32.849476   52103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0821 10:43:32.859093   52103 docker.go:196] disabling cri-docker service (if available) ...
	I0821 10:43:32.859155   52103 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0821 10:43:32.870731   52103 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0821 10:43:32.882877   52103 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0821 10:43:32.955968   52103 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0821 10:43:33.036055   52103 docker.go:212] disabling docker service ...
	I0821 10:43:33.036114   52103 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0821 10:43:33.052459   52103 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0821 10:43:33.062249   52103 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0821 10:43:33.135656   52103 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0821 10:43:33.216330   52103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0821 10:43:33.226282   52103 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0821 10:43:33.239897   52103 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0821 10:43:33.239972   52103 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0821 10:43:33.248486   52103 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0821 10:43:33.248534   52103 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0821 10:43:33.256680   52103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0821 10:43:33.264803   52103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
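
Spelled out, the sed edits above leave the drop-in /etc/crio/crio.conf.d/02-crio.conf with lines roughly like the following (a sketch assuming the stock kicbase layout, where cgroup_manager sits under [crio.runtime] and pause_image under [crio.image]):

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.2"

The cgroupfs setting mirrors the "cgroupfs" driver detected on the host earlier in the log, so kubelet and CRI-O agree on cgroup management.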
	I0821 10:43:33.273099   52103 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0821 10:43:33.280509   52103 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0821 10:43:33.287178   52103 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0821 10:43:33.293854   52103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0821 10:43:33.363682   52103 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0821 10:43:33.459307   52103 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0821 10:43:33.459396   52103 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0821 10:43:33.463253   52103 start.go:534] Will wait 60s for crictl version
	I0821 10:43:33.463292   52103 ssh_runner.go:195] Run: which crictl
	I0821 10:43:33.466171   52103 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0821 10:43:33.495779   52103 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0821 10:43:33.495847   52103 ssh_runner.go:195] Run: crio --version
	I0821 10:43:33.526925   52103 ssh_runner.go:195] Run: crio --version
	I0821 10:43:33.559568   52103 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I0821 10:43:33.560973   52103 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-218089 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0821 10:43:33.577192   52103 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0821 10:43:33.580695   52103 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0821 10:43:33.590498   52103 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0821 10:43:33.590555   52103 ssh_runner.go:195] Run: sudo crictl images --output json
	I0821 10:43:33.633648   52103 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0821 10:43:33.633728   52103 ssh_runner.go:195] Run: which lz4
	I0821 10:43:33.636861   52103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0821 10:43:33.636942   52103 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0821 10:43:33.639823   52103 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0821 10:43:33.639846   52103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I0821 10:43:34.524215   52103 crio.go:444] Took 0.887306 seconds to copy over tarball
	I0821 10:43:34.524288   52103 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0821 10:43:36.689754   52103 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.165435854s)
	I0821 10:43:36.689781   52103 crio.go:451] Took 2.165545 seconds to extract the tarball
	I0821 10:43:36.689791   52103 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0821 10:43:36.758681   52103 ssh_runner.go:195] Run: sudo crictl images --output json
	I0821 10:43:36.789116   52103 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0821 10:43:36.789141   52103 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0821 10:43:36.789216   52103 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0821 10:43:36.789263   52103 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0821 10:43:36.789270   52103 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0821 10:43:36.789281   52103 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0821 10:43:36.789321   52103 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0821 10:43:36.789206   52103 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0821 10:43:36.789246   52103 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0821 10:43:36.789251   52103 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0821 10:43:36.790515   52103 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0821 10:43:36.790533   52103 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0821 10:43:36.790538   52103 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0821 10:43:36.790533   52103 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0821 10:43:36.790604   52103 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0821 10:43:36.790770   52103 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0821 10:43:36.790794   52103 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0821 10:43:36.790803   52103 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0821 10:43:36.970279   52103 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0821 10:43:36.984183   52103 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0821 10:43:36.989885   52103 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0821 10:43:36.990045   52103 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0821 10:43:37.008018   52103 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0821 10:43:37.008057   52103 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0821 10:43:37.008097   52103 ssh_runner.go:195] Run: which crictl
	I0821 10:43:37.013247   52103 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0821 10:43:37.022298   52103 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0821 10:43:37.024407   52103 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0821 10:43:37.024456   52103 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0821 10:43:37.024506   52103 ssh_runner.go:195] Run: which crictl
	I0821 10:43:37.043985   52103 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0821 10:43:37.044028   52103 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0821 10:43:37.044078   52103 ssh_runner.go:195] Run: which crictl
	I0821 10:43:37.050766   52103 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0821 10:43:37.123441   52103 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0821 10:43:37.143298   52103 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0821 10:43:37.143346   52103 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0821 10:43:37.143396   52103 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0821 10:43:37.143438   52103 ssh_runner.go:195] Run: which crictl
	I0821 10:43:37.143464   52103 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0821 10:43:37.143512   52103 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0821 10:43:37.143514   52103 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0821 10:43:37.143585   52103 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0821 10:43:37.143590   52103 ssh_runner.go:195] Run: which crictl
	I0821 10:43:37.143711   52103 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0821 10:43:37.143738   52103 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0821 10:43:37.143772   52103 ssh_runner.go:195] Run: which crictl
	I0821 10:43:37.167156   52103 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0821 10:43:37.167199   52103 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0821 10:43:37.167240   52103 ssh_runner.go:195] Run: which crictl
	I0821 10:43:37.181961   52103 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0821 10:43:37.182716   52103 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0821 10:43:37.182760   52103 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0821 10:43:37.182856   52103 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0821 10:43:37.182948   52103 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0821 10:43:37.182974   52103 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0821 10:43:37.183051   52103 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0821 10:43:37.269480   52103 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0821 10:43:37.281001   52103 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0821 10:43:37.281073   52103 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0821 10:43:37.283337   52103 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0821 10:43:37.283391   52103 cache_images.go:92] LoadImages completed in 494.233395ms
	W0821 10:43:37.283449   52103 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0: no such file or directory
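Editor's note: the cache_images lines above show minikube reconciling its on-disk image cache with the container runtime: each pinned tag is inspected in the runtime ("sudo podman image inspect --format {{.Id}}"), a mismatch marks the image as "needs transfer", any stale tag is removed with crictl, and the cached tarball is then loaded. Here the etcd_3.4.3-0 tarball is missing from the local cache, so LoadImages finishes with the warning above. A minimal Go sketch of the exists-at-hash check (map contents are illustrative, copied from the log; not minikube's actual code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // imageID asks the container runtime for the stored ID of an image tag.
    // It returns "" when the tag is unknown to the runtime.
    func imageID(tag string) string {
        out, err := exec.Command("sudo", "podman", "image", "inspect",
            "--format", "{{.Id}}", tag).Output()
        if err != nil {
            return ""
        }
        return strings.TrimSpace(string(out))
    }

    func main() {
        // Pinned tag -> expected content hash (values taken from the log).
        want := map[string]string{
            "registry.k8s.io/pause:3.2": "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c",
        }
        for tag, hash := range want {
            if imageID(tag) == hash {
                continue // already present at the right hash
            }
            fmt.Printf("%q needs transfer\n", tag)
            // Drop any stale tag so the cached tarball can be loaded cleanly.
            _ = exec.Command("sudo", "crictl", "rmi", tag).Run()
        }
    }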
	I0821 10:43:37.283507   52103 ssh_runner.go:195] Run: crio config
	I0821 10:43:37.371988   52103 cni.go:84] Creating CNI manager for ""
	I0821 10:43:37.372010   52103 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0821 10:43:37.372051   52103 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0821 10:43:37.372078   52103 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-218089 NodeName:ingress-addon-legacy-218089 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0821 10:43:37.372212   52103 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-218089"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
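Editor's note: the kubeadm config above is rendered from the options struct logged at kubeadm.go:176 and written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A sketch of that render step using text/template, with a made-up template fragment and option subset (not minikube's actual template):

    package main

    import (
        "os"
        "text/template"
    )

    // Illustrative subset of the kubeadm options logged above.
    type opts struct {
        AdvertiseAddress string
        BindPort         int
        NodeName         string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      name: "{{.NodeName}}"
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        // Render to stdout; minikube instead scp's the result to the node.
        t.Execute(os.Stdout, opts{"192.168.49.2", 8443, "ingress-addon-legacy-218089"})
    }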
	I0821 10:43:37.372307   52103 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-218089 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-218089 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
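Editor's note: the unit text above becomes a systemd drop-in (the 486-byte 10-kubeadm.conf scp'd just below). The bare "ExecStart=" line clears the ExecStart inherited from the base kubelet.service before the version-pinned command is set, which is how systemd overrides work for exec directives. A sketch of writing such a drop-in, with the flag list shortened for brevity:

    package main

    import "os"

    func main() {
        // systemd drop-in: the empty "ExecStart=" resets the ExecStart list
        // from kubelet.service before the real command is installed.
        unit := `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock
    `
        // Written locally for illustration; the real target is
        // /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on the node.
        if err := os.WriteFile("10-kubeadm.conf", []byte(unit), 0o644); err != nil {
            panic(err)
        }
    }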
	I0821 10:43:37.372354   52103 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0821 10:43:37.380085   52103 binaries.go:44] Found k8s binaries, skipping transfer
	I0821 10:43:37.380136   52103 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0821 10:43:37.387399   52103 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I0821 10:43:37.402060   52103 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0821 10:43:37.417327   52103 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0821 10:43:37.432274   52103 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0821 10:43:37.435199   52103 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
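Editor's note: the /bin/bash one-liner above keeps the /etc/hosts edit idempotent: any previous line for control-plane.minikube.internal is filtered out with grep -v before the current mapping is appended and copied back with sudo. A rough Go equivalent, writing a local "hosts" file rather than the real /etc/hosts:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHost rewrites an /etc/hosts-style file so that exactly one line
    // maps name to ip, mirroring the grep -v / echo pipeline in the log.
    // (Blank lines are dropped too; fine for a sketch.)
    func ensureHost(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            if line != "" && !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        _ = ensureHost("hosts", "192.168.49.2", "control-plane.minikube.internal")
    }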
	I0821 10:43:37.444075   52103 certs.go:56] Setting up /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089 for IP: 192.168.49.2
	I0821 10:43:37.444104   52103 certs.go:190] acquiring lock for shared ca certs: {Name:mkb88db7eb1befc1f1b3279575458c71b2313cc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 10:43:37.444230   52103 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17102-5717/.minikube/ca.key
	I0821 10:43:37.444295   52103 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17102-5717/.minikube/proxy-client-ca.key
	I0821 10:43:37.444356   52103 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/client.key
	I0821 10:43:37.444371   52103 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/client.crt with IP's: []
	I0821 10:43:37.810336   52103 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/client.crt ...
	I0821 10:43:37.810376   52103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/client.crt: {Name:mk3ad3d85d57a57adb2dde506c5866bbf916cfdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 10:43:37.810566   52103 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/client.key ...
	I0821 10:43:37.810588   52103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/client.key: {Name:mka67dfa4e632b3d42bc86643b44eeb8b91bc7a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 10:43:37.810712   52103 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/apiserver.key.dd3b5fb2
	I0821 10:43:37.810734   52103 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0821 10:43:37.938676   52103 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/apiserver.crt.dd3b5fb2 ...
	I0821 10:43:37.938708   52103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/apiserver.crt.dd3b5fb2: {Name:mk07b7ab745c94c02edf8c8ce9b5a882fa86187c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 10:43:37.938876   52103 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/apiserver.key.dd3b5fb2 ...
	I0821 10:43:37.938890   52103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/apiserver.key.dd3b5fb2: {Name:mk738bf1d3bc798fb754ee0dd6dfb5ad9bb10868 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 10:43:37.938987   52103 certs.go:337] copying /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/apiserver.crt
	I0821 10:43:37.939086   52103 certs.go:341] copying /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/apiserver.key
	I0821 10:43:37.939173   52103 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/proxy-client.key
	I0821 10:43:37.939196   52103 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/proxy-client.crt with IP's: []
	I0821 10:43:38.108993   52103 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/proxy-client.crt ...
	I0821 10:43:38.109024   52103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/proxy-client.crt: {Name:mk9e5caf9b30fce5caf225ac9ce174bca9313ff9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 10:43:38.109195   52103 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/proxy-client.key ...
	I0821 10:43:38.109209   52103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/proxy-client.key: {Name:mk75546f92d41418da42252fd21fe19521cd4496 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
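Editor's note: certs.go skips regeneration of the shared minikubeCA and proxyClientCA, then mints per-profile leaves: a client cert, an apiserver serving cert for the IPs listed above (node, service VIP, loopbacks), and an aggregator proxy-client pair. A self-contained Go sketch of the CA-plus-leaf flow with crypto/x509; names, serials, and lifetimes are illustrative, not minikube's exact values:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // CA key/cert (minikube reuses the shared one under .minikube/).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Per-profile apiserver serving cert, signed for the IPs in the log.
        key, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("192.168.49.2"), net.ParseIP("10.96.0.1"),
                net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
            },
        }
        der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }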
	I0821 10:43:38.109311   52103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0821 10:43:38.109341   52103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0821 10:43:38.109362   52103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0821 10:43:38.109381   52103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0821 10:43:38.109397   52103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0821 10:43:38.109412   52103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0821 10:43:38.109429   52103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0821 10:43:38.109444   52103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0821 10:43:38.109508   52103 certs.go:437] found cert: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/home/jenkins/minikube-integration/17102-5717/.minikube/certs/12460.pem (1338 bytes)
	W0821 10:43:38.109562   52103 certs.go:433] ignoring /home/jenkins/minikube-integration/17102-5717/.minikube/certs/home/jenkins/minikube-integration/17102-5717/.minikube/certs/12460_empty.pem, impossibly tiny 0 bytes
	I0821 10:43:38.109583   52103 certs.go:437] found cert: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca-key.pem (1679 bytes)
	I0821 10:43:38.109619   52103 certs.go:437] found cert: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca.pem (1078 bytes)
	I0821 10:43:38.109653   52103 certs.go:437] found cert: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/home/jenkins/minikube-integration/17102-5717/.minikube/certs/cert.pem (1123 bytes)
	I0821 10:43:38.109690   52103 certs.go:437] found cert: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/home/jenkins/minikube-integration/17102-5717/.minikube/certs/key.pem (1675 bytes)
	I0821 10:43:38.109754   52103 certs.go:437] found cert: /home/jenkins/minikube-integration/17102-5717/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17102-5717/.minikube/files/etc/ssl/certs/124602.pem (1708 bytes)
	I0821 10:43:38.109799   52103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0821 10:43:38.109819   52103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/12460.pem -> /usr/share/ca-certificates/12460.pem
	I0821 10:43:38.109836   52103 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/files/etc/ssl/certs/124602.pem -> /usr/share/ca-certificates/124602.pem
	I0821 10:43:38.110413   52103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0821 10:43:38.131223   52103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0821 10:43:38.150921   52103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0821 10:43:38.170455   52103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0821 10:43:38.190752   52103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0821 10:43:38.210702   52103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0821 10:43:38.231626   52103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0821 10:43:38.251652   52103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0821 10:43:38.271942   52103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0821 10:43:38.291948   52103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/certs/12460.pem --> /usr/share/ca-certificates/12460.pem (1338 bytes)
	I0821 10:43:38.311562   52103 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/files/etc/ssl/certs/124602.pem --> /usr/share/ca-certificates/124602.pem (1708 bytes)
	I0821 10:43:38.331697   52103 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0821 10:43:38.346484   52103 ssh_runner.go:195] Run: openssl version
	I0821 10:43:38.351285   52103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0821 10:43:38.359028   52103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0821 10:43:38.361946   52103 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 21 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I0821 10:43:38.361999   52103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0821 10:43:38.367814   52103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0821 10:43:38.375671   52103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12460.pem && ln -fs /usr/share/ca-certificates/12460.pem /etc/ssl/certs/12460.pem"
	I0821 10:43:38.383478   52103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12460.pem
	I0821 10:43:38.386447   52103 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 21 10:39 /usr/share/ca-certificates/12460.pem
	I0821 10:43:38.386496   52103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12460.pem
	I0821 10:43:38.392355   52103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12460.pem /etc/ssl/certs/51391683.0"
	I0821 10:43:38.400181   52103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/124602.pem && ln -fs /usr/share/ca-certificates/124602.pem /etc/ssl/certs/124602.pem"
	I0821 10:43:38.407939   52103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/124602.pem
	I0821 10:43:38.410946   52103 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 21 10:39 /usr/share/ca-certificates/124602.pem
	I0821 10:43:38.410994   52103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/124602.pem
	I0821 10:43:38.416903   52103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/124602.pem /etc/ssl/certs/3ec20f2e.0"
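Editor's note: the openssl/ln -fs sequence above is how these CAs become visible to TLS clients on the node. OpenSSL resolves trust anchors in /etc/ssl/certs by subject-hash file name, so "openssl x509 -hash -noout" prints the hash (b5213941, 51391683, 3ec20f2e here) and a "<hash>.0" symlink makes the certificate discoverable. A Go sketch of one such install step, assuming it runs with enough privilege to write /etc/ssl/certs:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // subjectHash returns OpenSSL's subject hash for a PEM certificate,
    // the value used to name links like /etc/ssl/certs/b5213941.0.
    func subjectHash(pemPath string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
        h, err := subjectHash(cert)
        if err != nil {
            panic(err)
        }
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", h)
        // ln -fs equivalent: replace any existing link.
        os.Remove(link)
        if err := os.Symlink(cert, link); err != nil {
            panic(err)
        }
    }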
	I0821 10:43:38.424479   52103 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0821 10:43:38.427119   52103 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0821 10:43:38.427170   52103 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-218089 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-218089 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 10:43:38.427255   52103 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0821 10:43:38.427304   52103 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0821 10:43:38.460128   52103 cri.go:89] found id: ""
	I0821 10:43:38.460211   52103 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0821 10:43:38.468046   52103 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0821 10:43:38.475610   52103 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0821 10:43:38.475686   52103 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0821 10:43:38.483042   52103 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0821 10:43:38.483081   52103 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0821 10:43:38.524825   52103 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0821 10:43:38.524884   52103 kubeadm.go:322] [preflight] Running pre-flight checks
	I0821 10:43:38.560540   52103 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0821 10:43:38.560664   52103 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1039-gcp
	I0821 10:43:38.560736   52103 kubeadm.go:322] OS: Linux
	I0821 10:43:38.560798   52103 kubeadm.go:322] CGROUPS_CPU: enabled
	I0821 10:43:38.560862   52103 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0821 10:43:38.560930   52103 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0821 10:43:38.560985   52103 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0821 10:43:38.561027   52103 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0821 10:43:38.561073   52103 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0821 10:43:38.624504   52103 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0821 10:43:38.624624   52103 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0821 10:43:38.624723   52103 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0821 10:43:38.798117   52103 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0821 10:43:38.799040   52103 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0821 10:43:38.799104   52103 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0821 10:43:38.875184   52103 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0821 10:43:38.877563   52103 out.go:204]   - Generating certificates and keys ...
	I0821 10:43:38.877711   52103 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0821 10:43:38.877843   52103 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0821 10:43:39.010734   52103 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0821 10:43:39.317163   52103 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0821 10:43:39.544462   52103 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0821 10:43:39.625282   52103 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0821 10:43:39.704286   52103 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0821 10:43:39.704478   52103 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-218089 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0821 10:43:39.767182   52103 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0821 10:43:39.767407   52103 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-218089 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0821 10:43:39.869297   52103 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0821 10:43:40.229671   52103 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0821 10:43:40.369704   52103 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0821 10:43:40.369823   52103 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0821 10:43:40.493769   52103 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0821 10:43:40.623541   52103 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0821 10:43:40.800848   52103 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0821 10:43:40.907763   52103 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0821 10:43:40.908484   52103 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0821 10:43:40.910621   52103 out.go:204]   - Booting up control plane ...
	I0821 10:43:40.910725   52103 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0821 10:43:40.914992   52103 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0821 10:43:40.916048   52103 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0821 10:43:40.916833   52103 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0821 10:43:40.918694   52103 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0821 10:43:46.920814   52103 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.002090 seconds
	I0821 10:43:46.920973   52103 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0821 10:43:46.931278   52103 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0821 10:43:47.445346   52103 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0821 10:43:47.445547   52103 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-218089 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0821 10:43:47.952815   52103 kubeadm.go:322] [bootstrap-token] Using token: 8tcayb.5x6zen0r1gis5k9z
	I0821 10:43:47.954309   52103 out.go:204]   - Configuring RBAC rules ...
	I0821 10:43:47.954442   52103 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0821 10:43:47.957964   52103 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0821 10:43:47.964632   52103 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0821 10:43:47.966319   52103 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0821 10:43:47.967974   52103 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0821 10:43:47.969690   52103 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0821 10:43:47.975809   52103 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0821 10:43:48.179823   52103 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0821 10:43:48.364692   52103 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0821 10:43:48.365819   52103 kubeadm.go:322] 
	I0821 10:43:48.365886   52103 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0821 10:43:48.365909   52103 kubeadm.go:322] 
	I0821 10:43:48.366018   52103 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0821 10:43:48.366035   52103 kubeadm.go:322] 
	I0821 10:43:48.366087   52103 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0821 10:43:48.366162   52103 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0821 10:43:48.366247   52103 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0821 10:43:48.366257   52103 kubeadm.go:322] 
	I0821 10:43:48.366327   52103 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0821 10:43:48.366425   52103 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0821 10:43:48.366517   52103 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0821 10:43:48.366527   52103 kubeadm.go:322] 
	I0821 10:43:48.366622   52103 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0821 10:43:48.366736   52103 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0821 10:43:48.366744   52103 kubeadm.go:322] 
	I0821 10:43:48.366851   52103 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 8tcayb.5x6zen0r1gis5k9z \
	I0821 10:43:48.366989   52103 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:a6ae141b3a3795878aa14999e04688399a9a305fa66151b732d0ee2f32cf9691 \
	I0821 10:43:48.367025   52103 kubeadm.go:322]     --control-plane 
	I0821 10:43:48.367035   52103 kubeadm.go:322] 
	I0821 10:43:48.367164   52103 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0821 10:43:48.367181   52103 kubeadm.go:322] 
	I0821 10:43:48.367286   52103 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 8tcayb.5x6zen0r1gis5k9z \
	I0821 10:43:48.367448   52103 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:a6ae141b3a3795878aa14999e04688399a9a305fa66151b732d0ee2f32cf9691 
	I0821 10:43:48.368878   52103 kubeadm.go:322] W0821 10:43:38.524374    1377 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0821 10:43:48.369110   52103 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1039-gcp\n", err: exit status 1
	I0821 10:43:48.369204   52103 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0821 10:43:48.369354   52103 kubeadm.go:322] W0821 10:43:40.914728    1377 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0821 10:43:48.369502   52103 kubeadm.go:322] W0821 10:43:40.915799    1377 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0821 10:43:48.369523   52103 cni.go:84] Creating CNI manager for ""
	I0821 10:43:48.369533   52103 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0821 10:43:48.371116   52103 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0821 10:43:48.372376   52103 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0821 10:43:48.375927   52103 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I0821 10:43:48.375942   52103 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0821 10:43:48.390807   52103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
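Editor's note: with the docker driver paired with the crio runtime, minikube selects kindnet as the CNI and applies its manifest with the version-pinned kubectl against the node-local kubeconfig, as run above. The same invocation as a Go sketch (paths copied from the log):

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        // Apply the CNI manifest on the node, exactly as the Run: line above.
        cmd := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.18.20/kubectl", "apply",
            "--kubeconfig=/var/lib/minikube/kubeconfig",
            "-f", "/var/tmp/minikube/cni.yaml")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }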
	I0821 10:43:48.779371   52103 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0821 10:43:48.779457   52103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:43:48.779458   52103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43 minikube.k8s.io/name=ingress-addon-legacy-218089 minikube.k8s.io/updated_at=2023_08_21T10_43_48_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:43:48.786438   52103 ops.go:34] apiserver oom_adj: -16
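Editor's note: ops.go confirms the apiserver's oom_adj is -16, i.e. the kernel OOM killer is biased away from killing it under memory pressure. A sketch of the same probe, mirroring the "cat /proc/$(pgrep kube-apiserver)/oom_adj" step above:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // pgrep -n picks the newest kube-apiserver process.
        out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
        if err != nil {
            panic(err)
        }
        pid := strings.TrimSpace(string(out))
        val, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            panic(err)
        }
        fmt.Printf("apiserver oom_adj: %s", val)
    }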
	I0821 10:43:48.880285   52103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:43:48.969313   52103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:43:49.538681   52103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:43:50.038440   52103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:43:50.538879   52103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:43:51.038985   52103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:43:51.538147   52103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:43:52.038214   52103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:43:52.538670   52103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:43:53.038712   52103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:43:53.538657   52103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:43:54.038233   52103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:43:54.539078   52103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:43:55.038229   52103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:43:55.538734   52103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:43:56.038223   52103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:43:56.538420   52103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:43:57.038863   52103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:43:57.538800   52103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:43:58.038647   52103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:43:58.538345   52103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:43:59.038781   52103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:43:59.538961   52103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:44:00.038887   52103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:44:00.538713   52103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:44:01.038169   52103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:44:01.538866   52103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:44:02.038754   52103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:44:02.538148   52103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:44:03.038960   52103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:44:03.539030   52103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:44:04.038143   52103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:44:04.539038   52103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:44:04.643919   52103 kubeadm.go:1081] duration metric: took 15.864520887s to wait for elevateKubeSystemPrivileges.
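Editor's note: the burst of "kubectl get sa default" runs at roughly 500ms intervals above is minikube waiting for kubeadm's controller-manager to create the default service account (the target of the minikube-rbac clusterrolebinding); the elevateKubeSystemPrivileges step took 15.86s here. A sketch of that polling loop (command and paths from the log; the timeout value is an assumption):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA polls until `kubectl get sa default` succeeds,
    // roughly matching the ~500ms cadence visible in the log.
    func waitForDefaultSA(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            err := exec.Command("sudo",
                "/var/lib/minikube/binaries/v1.18.20/kubectl",
                "get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig").Run()
            if err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account not created within %v", timeout)
    }

    func main() {
        if err := waitForDefaultSA(2 * time.Minute); err != nil {
            panic(err)
        }
    }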
	I0821 10:44:04.643966   52103 kubeadm.go:406] StartCluster complete in 26.216798662s
	I0821 10:44:04.643987   52103 settings.go:142] acquiring lock: {Name:mkafc51d9ee0fb589973b887f0111ccc8fd1075b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 10:44:04.644058   52103 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17102-5717/kubeconfig
	I0821 10:44:04.645200   52103 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-5717/kubeconfig: {Name:mkb50cf560191d5f6ff2b436dd414f0b5471024e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 10:44:04.645442   52103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0821 10:44:04.645593   52103 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0821 10:44:04.645726   52103 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-218089"
	I0821 10:44:04.645725   52103 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-218089"
	I0821 10:44:04.645764   52103 config.go:182] Loaded profile config "ingress-addon-legacy-218089": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0821 10:44:04.645751   52103 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-218089"
	I0821 10:44:04.645770   52103 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-218089"
	I0821 10:44:04.645850   52103 host.go:66] Checking if "ingress-addon-legacy-218089" exists ...
	I0821 10:44:04.646139   52103 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-218089 --format={{.State.Status}}
	I0821 10:44:04.646170   52103 kapi.go:59] client config for ingress-addon-legacy-218089: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/client.crt", KeyFile:"/home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/client.key", CAFile:"/home/jenkins/minikube-integration/17102-5717/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d61e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0821 10:44:04.646426   52103 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-218089 --format={{.State.Status}}
	I0821 10:44:04.647158   52103 cert_rotation.go:137] Starting client certificate rotation controller
	I0821 10:44:04.666517   52103 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-218089" context rescaled to 1 replicas
	I0821 10:44:04.666550   52103 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0821 10:44:04.668115   52103 out.go:177] * Verifying Kubernetes components...
	I0821 10:44:04.670629   52103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0821 10:44:04.672785   52103 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0821 10:44:04.674189   52103 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0821 10:44:04.674207   52103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0821 10:44:04.674264   52103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-218089
	I0821 10:44:04.674375   52103 kapi.go:59] client config for ingress-addon-legacy-218089: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/client.crt", KeyFile:"/home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/client.key", CAFile:"/home/jenkins/minikube-integration/17102-5717/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d61e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0821 10:44:04.681172   52103 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-218089"
	I0821 10:44:04.681217   52103 host.go:66] Checking if "ingress-addon-legacy-218089" exists ...
	I0821 10:44:04.681697   52103 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-218089 --format={{.State.Status}}
	I0821 10:44:04.695397   52103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/ingress-addon-legacy-218089/id_rsa Username:docker}
	I0821 10:44:04.697328   52103 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0821 10:44:04.697349   52103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0821 10:44:04.697406   52103 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-218089
	I0821 10:44:04.712758   52103 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/ingress-addon-legacy-218089/id_rsa Username:docker}
	I0821 10:44:04.842481   52103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0821 10:44:04.843023   52103 kapi.go:59] client config for ingress-addon-legacy-218089: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/client.crt", KeyFile:"/home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/client.key", CAFile:"/home/jenkins/minikube-integration/17102-5717/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d61e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0821 10:44:04.843287   52103 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-218089" to be "Ready" ...
	I0821 10:44:04.935873   52103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0821 10:44:04.955432   52103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0821 10:44:05.081543   52103 start.go:901] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
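Editor's note: the sed pipeline above rewrites the CoreDNS Corefile in place: it inserts a hosts plugin block ahead of the "forward . /etc/resolv.conf" line so that host.minikube.internal resolves to the docker gateway (192.168.49.1) instead of leaking to upstream DNS, and enables query logging next to "errors". A Go sketch of the insertion on a minimal, illustrative Corefile fragment:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Minimal Corefile fragment in the shape kubeadm ships (illustrative).
        corefile := `.:53 {
            errors
            forward . /etc/resolv.conf
            cache 30
    }`
        // Insert a hosts block ahead of the forward plugin, as the sed
        // pipeline in the log does.
        hosts := `hosts {
               192.168.49.1 host.minikube.internal
               fallthrough
            }
            `
        out := strings.Replace(corefile, "forward .", hosts+"forward .", 1)
        fmt.Println(out)
    }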
	I0821 10:44:05.260731   52103 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0821 10:44:05.261994   52103 addons.go:502] enable addons completed in 616.408561ms: enabled=[storage-provisioner default-storageclass]
	I0821 10:44:06.851672   52103 node_ready.go:58] node "ingress-addon-legacy-218089" has status "Ready":"False"
	I0821 10:44:08.851196   52103 node_ready.go:49] node "ingress-addon-legacy-218089" has status "Ready":"True"
	I0821 10:44:08.851218   52103 node_ready.go:38] duration metric: took 4.007914371s waiting for node "ingress-addon-legacy-218089" to be "Ready" ...
	I0821 10:44:08.851230   52103 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0821 10:44:08.857685   52103 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-67rfg" in "kube-system" namespace to be "Ready" ...
	I0821 10:44:10.864966   52103 pod_ready.go:102] pod "coredns-66bff467f8-67rfg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-21 10:44:04 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0821 10:44:12.867414   52103 pod_ready.go:102] pod "coredns-66bff467f8-67rfg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-21 10:44:04 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0821 10:44:15.367734   52103 pod_ready.go:102] pod "coredns-66bff467f8-67rfg" in "kube-system" namespace has status "Ready":"False"
	I0821 10:44:17.866656   52103 pod_ready.go:102] pod "coredns-66bff467f8-67rfg" in "kube-system" namespace has status "Ready":"False"
	I0821 10:44:20.368974   52103 pod_ready.go:102] pod "coredns-66bff467f8-67rfg" in "kube-system" namespace has status "Ready":"False"
	I0821 10:44:21.367009   52103 pod_ready.go:92] pod "coredns-66bff467f8-67rfg" in "kube-system" namespace has status "Ready":"True"
	I0821 10:44:21.367032   52103 pod_ready.go:81] duration metric: took 12.509323113s waiting for pod "coredns-66bff467f8-67rfg" in "kube-system" namespace to be "Ready" ...
	I0821 10:44:21.367043   52103 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-218089" in "kube-system" namespace to be "Ready" ...
	I0821 10:44:21.370730   52103 pod_ready.go:92] pod "etcd-ingress-addon-legacy-218089" in "kube-system" namespace has status "Ready":"True"
	I0821 10:44:21.370748   52103 pod_ready.go:81] duration metric: took 3.698225ms waiting for pod "etcd-ingress-addon-legacy-218089" in "kube-system" namespace to be "Ready" ...
	I0821 10:44:21.370757   52103 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-218089" in "kube-system" namespace to be "Ready" ...
	I0821 10:44:21.374498   52103 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-218089" in "kube-system" namespace has status "Ready":"True"
	I0821 10:44:21.374514   52103 pod_ready.go:81] duration metric: took 3.751852ms waiting for pod "kube-apiserver-ingress-addon-legacy-218089" in "kube-system" namespace to be "Ready" ...
	I0821 10:44:21.374522   52103 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-218089" in "kube-system" namespace to be "Ready" ...
	I0821 10:44:21.377937   52103 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-218089" in "kube-system" namespace has status "Ready":"True"
	I0821 10:44:21.377957   52103 pod_ready.go:81] duration metric: took 3.429195ms waiting for pod "kube-controller-manager-ingress-addon-legacy-218089" in "kube-system" namespace to be "Ready" ...
	I0821 10:44:21.377964   52103 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vbx9l" in "kube-system" namespace to be "Ready" ...
	I0821 10:44:21.381336   52103 pod_ready.go:92] pod "kube-proxy-vbx9l" in "kube-system" namespace has status "Ready":"True"
	I0821 10:44:21.381351   52103 pod_ready.go:81] duration metric: took 3.381874ms waiting for pod "kube-proxy-vbx9l" in "kube-system" namespace to be "Ready" ...
	I0821 10:44:21.381358   52103 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-218089" in "kube-system" namespace to be "Ready" ...
	I0821 10:44:21.562747   52103 request.go:629] Waited for 181.32753ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-218089
	I0821 10:44:21.762726   52103 request.go:629] Waited for 197.348404ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-218089
	I0821 10:44:21.765431   52103 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-218089" in "kube-system" namespace has status "Ready":"True"
	I0821 10:44:21.765454   52103 pod_ready.go:81] duration metric: took 384.089735ms waiting for pod "kube-scheduler-ingress-addon-legacy-218089" in "kube-system" namespace to be "Ready" ...
	I0821 10:44:21.765467   52103 pod_ready.go:38] duration metric: took 12.914226717s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0821 10:44:21.765485   52103 api_server.go:52] waiting for apiserver process to appear ...
	I0821 10:44:21.765585   52103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0821 10:44:21.775664   52103 api_server.go:72] duration metric: took 17.109078893s to wait for apiserver process to appear ...
	I0821 10:44:21.775687   52103 api_server.go:88] waiting for apiserver healthz status ...
	I0821 10:44:21.775706   52103 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0821 10:44:21.781260   52103 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0821 10:44:21.782078   52103 api_server.go:141] control plane version: v1.18.20
	I0821 10:44:21.782098   52103 api_server.go:131] duration metric: took 6.403624ms to wait for apiserver health ...
	I0821 10:44:21.782110   52103 system_pods.go:43] waiting for kube-system pods to appear ...
	I0821 10:44:21.962499   52103 request.go:629] Waited for 180.324162ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0821 10:44:21.967666   52103 system_pods.go:59] 8 kube-system pods found
	I0821 10:44:21.967693   52103 system_pods.go:61] "coredns-66bff467f8-67rfg" [b44b922b-5503-437d-9507-e6287ba5bb8e] Running
	I0821 10:44:21.967700   52103 system_pods.go:61] "etcd-ingress-addon-legacy-218089" [055aa5d8-3827-413d-8ff5-1dc5d90666f7] Running
	I0821 10:44:21.967705   52103 system_pods.go:61] "kindnet-gcdzd" [1e5840de-fc22-4499-8c0a-b3a8abaa876e] Running
	I0821 10:44:21.967712   52103 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-218089" [ae0b217a-b923-47b0-9dae-d17fc37fb979] Running
	I0821 10:44:21.967718   52103 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-218089" [b026ec16-af6d-4797-adba-3ed6c3748702] Running
	I0821 10:44:21.967723   52103 system_pods.go:61] "kube-proxy-vbx9l" [f8aed074-c6f0-4042-a00b-dfd74e35df2e] Running
	I0821 10:44:21.967733   52103 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-218089" [16057064-18fb-4276-88d6-e2d6bc56c0f7] Running
	I0821 10:44:21.967741   52103 system_pods.go:61] "storage-provisioner" [b70da810-a6db-401a-b64a-a61f48bf6ee9] Running
	I0821 10:44:21.967752   52103 system_pods.go:74] duration metric: took 185.636365ms to wait for pod list to return data ...
	I0821 10:44:21.967765   52103 default_sa.go:34] waiting for default service account to be created ...
	I0821 10:44:22.163235   52103 request.go:629] Waited for 195.370465ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0821 10:44:22.165594   52103 default_sa.go:45] found service account: "default"
	I0821 10:44:22.165620   52103 default_sa.go:55] duration metric: took 197.838118ms for default service account to be created ...
	I0821 10:44:22.165628   52103 system_pods.go:116] waiting for k8s-apps to be running ...
	I0821 10:44:22.363040   52103 request.go:629] Waited for 197.349543ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0821 10:44:22.367995   52103 system_pods.go:86] 8 kube-system pods found
	I0821 10:44:22.368020   52103 system_pods.go:89] "coredns-66bff467f8-67rfg" [b44b922b-5503-437d-9507-e6287ba5bb8e] Running
	I0821 10:44:22.368026   52103 system_pods.go:89] "etcd-ingress-addon-legacy-218089" [055aa5d8-3827-413d-8ff5-1dc5d90666f7] Running
	I0821 10:44:22.368030   52103 system_pods.go:89] "kindnet-gcdzd" [1e5840de-fc22-4499-8c0a-b3a8abaa876e] Running
	I0821 10:44:22.368034   52103 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-218089" [ae0b217a-b923-47b0-9dae-d17fc37fb979] Running
	I0821 10:44:22.368039   52103 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-218089" [b026ec16-af6d-4797-adba-3ed6c3748702] Running
	I0821 10:44:22.368043   52103 system_pods.go:89] "kube-proxy-vbx9l" [f8aed074-c6f0-4042-a00b-dfd74e35df2e] Running
	I0821 10:44:22.368047   52103 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-218089" [16057064-18fb-4276-88d6-e2d6bc56c0f7] Running
	I0821 10:44:22.368051   52103 system_pods.go:89] "storage-provisioner" [b70da810-a6db-401a-b64a-a61f48bf6ee9] Running
	I0821 10:44:22.368057   52103 system_pods.go:126] duration metric: took 202.425294ms to wait for k8s-apps to be running ...
	I0821 10:44:22.368075   52103 system_svc.go:44] waiting for kubelet service to be running ....
	I0821 10:44:22.368114   52103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0821 10:44:22.378526   52103 system_svc.go:56] duration metric: took 10.440942ms WaitForService to wait for kubelet.
	I0821 10:44:22.378549   52103 kubeadm.go:581] duration metric: took 17.711968008s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0821 10:44:22.378570   52103 node_conditions.go:102] verifying NodePressure condition ...
	I0821 10:44:22.562941   52103 request.go:629] Waited for 184.294983ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0821 10:44:22.565600   52103 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0821 10:44:22.565629   52103 node_conditions.go:123] node cpu capacity is 8
	I0821 10:44:22.565639   52103 node_conditions.go:105] duration metric: took 187.065365ms to run NodePressure ...
	I0821 10:44:22.565649   52103 start.go:228] waiting for startup goroutines ...
	I0821 10:44:22.565655   52103 start.go:233] waiting for cluster config update ...
	I0821 10:44:22.565666   52103 start.go:242] writing updated cluster config ...
	I0821 10:44:22.565920   52103 ssh_runner.go:195] Run: rm -f paused
	I0821 10:44:22.610937   52103 start.go:600] kubectl: 1.28.0, cluster: 1.18.20 (minor skew: 10)
	I0821 10:44:22.613354   52103 out.go:177] 
	W0821 10:44:22.615016   52103 out.go:239] ! /usr/local/bin/kubectl is version 1.28.0, which may have incompatibilities with Kubernetes 1.18.20.
	I0821 10:44:22.616436   52103 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0821 10:44:22.617800   52103 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-218089" cluster and "default" namespace by default
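	Note on the repeated "client-side throttling" messages above: they come from client-go's default client-side rate limiter (QPS 5, burst 10), not from server-side API Priority and Fairness, which is why each status-polling request waits roughly 200ms. Below is a minimal Go sketch of raising those limits in a client-go program; the kubeconfig location and the chosen QPS/Burst values are illustrative assumptions, not what minikube itself does.
	
	    // Sketch: raising client-go's default client-side rate limits.
	    // Assumes a standard kubeconfig at the default home location;
	    // the QPS/Burst values are illustrative, not minikube's settings.
	    package main
	
	    import (
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )
	
	    func main() {
	        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	        if err != nil {
	            panic(err)
	        }
	        config.QPS = 50    // client-go default is 5 requests/second
	        config.Burst = 100 // client-go default burst is 10
	        clientset, err := kubernetes.NewForConfig(config)
	        if err != nil {
	            panic(err)
	        }
	        _ = clientset // requests through this clientset are now throttled at the higher rate
	    }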
	
	* 
	* ==> CRI-O <==
	* Aug 21 10:47:13 ingress-addon-legacy-218089 crio[960]: time="2023-08-21 10:47:13.072743449Z" level=info msg="Creating container: default/hello-world-app-5f5d8b66bb-c7bbd/hello-world-app" id=b74bb4f2-2f18-4daa-aa48-6d744ce52410 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 21 10:47:13 ingress-addon-legacy-218089 crio[960]: time="2023-08-21 10:47:13.072847534Z" level=warning msg="Allowed annotations are specified for workload []"
	Aug 21 10:47:13 ingress-addon-legacy-218089 crio[960]: time="2023-08-21 10:47:13.181926699Z" level=info msg="Created container 15fc184abc72309ed5636ed4d6e1507cbd878598e120f54a462c7ec15da5fc92: default/hello-world-app-5f5d8b66bb-c7bbd/hello-world-app" id=b74bb4f2-2f18-4daa-aa48-6d744ce52410 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 21 10:47:13 ingress-addon-legacy-218089 crio[960]: time="2023-08-21 10:47:13.182543381Z" level=info msg="Starting container: 15fc184abc72309ed5636ed4d6e1507cbd878598e120f54a462c7ec15da5fc92" id=5765b1ee-dd24-463b-bc31-944c8fc09f5b name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 21 10:47:13 ingress-addon-legacy-218089 crio[960]: time="2023-08-21 10:47:13.191306341Z" level=info msg="Started container" PID=4876 containerID=15fc184abc72309ed5636ed4d6e1507cbd878598e120f54a462c7ec15da5fc92 description=default/hello-world-app-5f5d8b66bb-c7bbd/hello-world-app id=5765b1ee-dd24-463b-bc31-944c8fc09f5b name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=bc2027128f6255bb69d016c4bd293eaccef077c72fcae2411459407a7709966f
	Aug 21 10:47:22 ingress-addon-legacy-218089 crio[960]: time="2023-08-21 10:47:22.604994367Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=17a0284f-1d75-4ce3-8fc8-5f08abe8a71c name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 21 10:47:28 ingress-addon-legacy-218089 crio[960]: time="2023-08-21 10:47:28.604734483Z" level=info msg="Stopping pod sandbox: 7e71e35ff25fabf58a7bea1364156a379211e15171ac8965ac39ba85891c9f0a" id=74a0aabb-9113-40c1-9fec-fd510dbad4cc name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 21 10:47:28 ingress-addon-legacy-218089 crio[960]: time="2023-08-21 10:47:28.605856688Z" level=info msg="Stopped pod sandbox: 7e71e35ff25fabf58a7bea1364156a379211e15171ac8965ac39ba85891c9f0a" id=74a0aabb-9113-40c1-9fec-fd510dbad4cc name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 21 10:47:29 ingress-addon-legacy-218089 crio[960]: time="2023-08-21 10:47:29.353364745Z" level=info msg="Stopping container: cfc27fb3bcd7196268b5afef0682f8ce36ac9544036f70891b2cd58478229ee7 (timeout: 2s)" id=90be695b-2142-4429-9d29-f76ccd338afd name=/runtime.v1alpha2.RuntimeService/StopContainer
	Aug 21 10:47:29 ingress-addon-legacy-218089 crio[960]: time="2023-08-21 10:47:29.356247399Z" level=info msg="Stopping container: cfc27fb3bcd7196268b5afef0682f8ce36ac9544036f70891b2cd58478229ee7 (timeout: 2s)" id=39acd7a7-304e-47da-bc6e-28cd8120fbea name=/runtime.v1alpha2.RuntimeService/StopContainer
	Aug 21 10:47:31 ingress-addon-legacy-218089 crio[960]: time="2023-08-21 10:47:31.362985827Z" level=warning msg="Stopping container cfc27fb3bcd7196268b5afef0682f8ce36ac9544036f70891b2cd58478229ee7 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=90be695b-2142-4429-9d29-f76ccd338afd name=/runtime.v1alpha2.RuntimeService/StopContainer
	Aug 21 10:47:31 ingress-addon-legacy-218089 conmon[3529]: conmon cfc27fb3bcd7196268b5 <ninfo>: container 3541 exited with status 137
	Aug 21 10:47:31 ingress-addon-legacy-218089 crio[960]: time="2023-08-21 10:47:31.524452629Z" level=info msg="Stopped container cfc27fb3bcd7196268b5afef0682f8ce36ac9544036f70891b2cd58478229ee7: ingress-nginx/ingress-nginx-controller-7fcf777cb7-bd4ss/controller" id=39acd7a7-304e-47da-bc6e-28cd8120fbea name=/runtime.v1alpha2.RuntimeService/StopContainer
	Aug 21 10:47:31 ingress-addon-legacy-218089 crio[960]: time="2023-08-21 10:47:31.524991639Z" level=info msg="Stopped container cfc27fb3bcd7196268b5afef0682f8ce36ac9544036f70891b2cd58478229ee7: ingress-nginx/ingress-nginx-controller-7fcf777cb7-bd4ss/controller" id=90be695b-2142-4429-9d29-f76ccd338afd name=/runtime.v1alpha2.RuntimeService/StopContainer
	Aug 21 10:47:31 ingress-addon-legacy-218089 crio[960]: time="2023-08-21 10:47:31.525121058Z" level=info msg="Stopping pod sandbox: c54b427f2745817ea571e3c65928ad2089b8231941b0b29e2ea83dbcd5a672b6" id=3271c757-9009-4a12-bb58-a3d25ced75eb name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 21 10:47:31 ingress-addon-legacy-218089 crio[960]: time="2023-08-21 10:47:31.525329023Z" level=info msg="Stopping pod sandbox: c54b427f2745817ea571e3c65928ad2089b8231941b0b29e2ea83dbcd5a672b6" id=3a2c9c34-08b3-4741-9e44-81be0ce5e237 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 21 10:47:31 ingress-addon-legacy-218089 crio[960]: time="2023-08-21 10:47:31.528185185Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-ML3ZQHRFBWWAOLZA - [0:0]\n:KUBE-HP-DUZFRY2BGHXYAG7Z - [0:0]\n-X KUBE-HP-ML3ZQHRFBWWAOLZA\n-X KUBE-HP-DUZFRY2BGHXYAG7Z\nCOMMIT\n"
	Aug 21 10:47:31 ingress-addon-legacy-218089 crio[960]: time="2023-08-21 10:47:31.529427788Z" level=info msg="Closing host port tcp:80"
	Aug 21 10:47:31 ingress-addon-legacy-218089 crio[960]: time="2023-08-21 10:47:31.529462367Z" level=info msg="Closing host port tcp:443"
	Aug 21 10:47:31 ingress-addon-legacy-218089 crio[960]: time="2023-08-21 10:47:31.530396282Z" level=info msg="Host port tcp:80 does not have an open socket"
	Aug 21 10:47:31 ingress-addon-legacy-218089 crio[960]: time="2023-08-21 10:47:31.530410521Z" level=info msg="Host port tcp:443 does not have an open socket"
	Aug 21 10:47:31 ingress-addon-legacy-218089 crio[960]: time="2023-08-21 10:47:31.530525113Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-bd4ss Namespace:ingress-nginx ID:c54b427f2745817ea571e3c65928ad2089b8231941b0b29e2ea83dbcd5a672b6 UID:8cfce820-9094-4a5d-9822-613ec58549b8 NetNS:/var/run/netns/941d6119-6114-4ad4-af2c-cc444c1e4b8d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 21 10:47:31 ingress-addon-legacy-218089 crio[960]: time="2023-08-21 10:47:31.530635773Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-bd4ss from CNI network \"kindnet\" (type=ptp)"
	Aug 21 10:47:31 ingress-addon-legacy-218089 crio[960]: time="2023-08-21 10:47:31.564903546Z" level=info msg="Stopped pod sandbox: c54b427f2745817ea571e3c65928ad2089b8231941b0b29e2ea83dbcd5a672b6" id=3271c757-9009-4a12-bb58-a3d25ced75eb name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 21 10:47:31 ingress-addon-legacy-218089 crio[960]: time="2023-08-21 10:47:31.565042267Z" level=info msg="Stopped pod sandbox (already stopped): c54b427f2745817ea571e3c65928ad2089b8231941b0b29e2ea83dbcd5a672b6" id=3a2c9c34-08b3-4741-9e44-81be0ce5e237 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	15fc184abc723       gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea            23 seconds ago      Running             hello-world-app           0                   bc2027128f625       hello-world-app-5f5d8b66bb-c7bbd
	01a94df24bd7c       docker.io/library/nginx@sha256:9d749f4c66e206398d0c2d667b2b14c201cc4fd089419245c14a031b9368de3a                    2 minutes ago       Running             nginx                     0                   3a26530d53183       nginx
	cfc27fb3bcd71       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   3 minutes ago       Exited              controller                0                   c54b427f27458       ingress-nginx-controller-7fcf777cb7-bd4ss
	eea2cb1af8ded       a013daf8730dbb3908d66f67c57053f09055fddb28fde0b5808cb24c27900dc8                                                   3 minutes ago       Exited              patch                     1                   b2c42980102ac       ingress-nginx-admission-patch-t7l9l
	b4ebf40c9e4db       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              create                    0                   c0673772f675b       ingress-nginx-admission-create-pw2q6
	baa25673c7981       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   75efa1936535c       coredns-66bff467f8-67rfg
	1704519fbd4d5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   cdeaf8a8e2fb9       storage-provisioner
	331e14b277202       docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974                 3 minutes ago       Running             kindnet-cni               0                   15648ce3151fa       kindnet-gcdzd
	2d65a78dc5312       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   72c438640dfec       kube-proxy-vbx9l
	bffceb376fff0       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   3 minutes ago       Running             etcd                      0                   0fd8523b43e84       etcd-ingress-addon-legacy-218089
	a3f808853f4cf       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   3 minutes ago       Running             kube-apiserver            0                   eb0dbda502863       kube-apiserver-ingress-addon-legacy-218089
	470f6a4b84b7c       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   3 minutes ago       Running             kube-scheduler            0                   1874f5157fe88       kube-scheduler-ingress-addon-legacy-218089
	13e358ef01665       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   3 minutes ago       Running             kube-controller-manager   0                   fdc7572d8f8eb       kube-controller-manager-ingress-addon-legacy-218089
	
	* 
	* ==> coredns [baa25673c7981b741cc186dd7f3a9007c79e3bdfe0e02543ad6b95008915d278] <==
	* [INFO] 10.244.0.5:37496 - 62915 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.005262155s
	[INFO] 10.244.0.5:44122 - 12122 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004336298s
	[INFO] 10.244.0.5:35693 - 48075 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004545758s
	[INFO] 10.244.0.5:42746 - 20374 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004297742s
	[INFO] 10.244.0.5:37496 - 4480 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004177705s
	[INFO] 10.244.0.5:39664 - 34470 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004183725s
	[INFO] 10.244.0.5:47524 - 63992 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004348269s
	[INFO] 10.244.0.5:45328 - 18968 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00449042s
	[INFO] 10.244.0.5:51951 - 1839 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004481144s
	[INFO] 10.244.0.5:37496 - 30052 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004949265s
	[INFO] 10.244.0.5:39664 - 319 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004979737s
	[INFO] 10.244.0.5:44122 - 2052 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005024454s
	[INFO] 10.244.0.5:35693 - 38085 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005070353s
	[INFO] 10.244.0.5:45328 - 7548 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005003272s
	[INFO] 10.244.0.5:47524 - 5625 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005008738s
	[INFO] 10.244.0.5:51951 - 38869 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005006148s
	[INFO] 10.244.0.5:44122 - 30415 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000061779s
	[INFO] 10.244.0.5:39664 - 40606 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000146945s
	[INFO] 10.244.0.5:51951 - 5606 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000046396s
	[INFO] 10.244.0.5:47524 - 6231 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000051974s
	[INFO] 10.244.0.5:35693 - 20745 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000183472s
	[INFO] 10.244.0.5:45328 - 45822 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000223741s
	[INFO] 10.244.0.5:42746 - 55694 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005382931s
	[INFO] 10.244.0.5:37496 - 62547 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000450544s
	[INFO] 10.244.0.5:42746 - 23913 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000070766s
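	Note on the NXDOMAIN bursts above: each lookup of hello-world-app is first expanded through the pod's resolv.conf search path (the .c.k8s-minikube.internal and .google.internal suffixes are the GCE host's search domains inherited by the node), and only the in-cluster fully qualified query returns NOERROR. A typical resolv.conf for a pod in this cluster would look like the sketch below; the nameserver address is the conventional kube-dns ClusterIP and is an assumption, since the file itself is not captured in this run.
	
	    # Illustrative pod /etc/resolv.conf (assumed, not captured in this log)
	    nameserver 10.96.0.10
	    search default.svc.cluster.local svc.cluster.local cluster.local c.k8s-minikube.internal google.internal
	    options ndots:5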
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-218089
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-218089
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43
	                    minikube.k8s.io/name=ingress-addon-legacy-218089
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_21T10_43_48_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 21 Aug 2023 10:43:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-218089
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 21 Aug 2023 10:47:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 21 Aug 2023 10:47:18 +0000   Mon, 21 Aug 2023 10:43:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 21 Aug 2023 10:47:18 +0000   Mon, 21 Aug 2023 10:43:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 21 Aug 2023 10:47:18 +0000   Mon, 21 Aug 2023 10:43:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 21 Aug 2023 10:47:18 +0000   Mon, 21 Aug 2023 10:44:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-218089
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	System Info:
	  Machine ID:                 f8dbd8b364fb41c68174235578e5f3c1
	  System UUID:                4f88b7c3-cd39-4876-ab22-068709c6fdf6
	  Boot ID:                    19bba9d5-fb53-4c36-8f17-b39d772f0931
	  Kernel Version:             5.15.0-1039-gcp
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-c7bbd                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  kube-system                 coredns-66bff467f8-67rfg                               100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m34s
	  kube-system                 etcd-ingress-addon-legacy-218089                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 kindnet-gcdzd                                          100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m34s
	  kube-system                 kube-apiserver-ingress-addon-legacy-218089             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-218089    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 kube-proxy-vbx9l                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 kube-scheduler-ingress-addon-legacy-218089             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             120Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  3m56s (x4 over 3m56s)  kubelet     Node ingress-addon-legacy-218089 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m56s (x4 over 3m56s)  kubelet     Node ingress-addon-legacy-218089 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m56s (x4 over 3m56s)  kubelet     Node ingress-addon-legacy-218089 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m49s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m49s                  kubelet     Node ingress-addon-legacy-218089 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m49s                  kubelet     Node ingress-addon-legacy-218089 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m49s                  kubelet     Node ingress-addon-legacy-218089 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m33s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m29s                  kubelet     Node ingress-addon-legacy-218089 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.004916] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006573] FS-Cache: N-cookie d=0000000057af7611{9p.inode} n=0000000097034eab
	[  +0.007347] FS-Cache: N-key=[8] '0690130200000000'
	[  +2.906182] FS-Cache: Duplicate cookie detected
	[  +0.004707] FS-Cache: O-cookie c=00000012 [p=00000002 fl=222 nc=0 na=1]
	[  +0.006744] FS-Cache: O-cookie d=00000000f9f2d848{9P.session} n=000000004e5885ae
	[  +0.007517] FS-Cache: O-key=[10] '34323935323639393534'
	[  +0.005373] FS-Cache: N-cookie c=00000013 [p=00000002 fl=2 nc=0 na=1]
	[  +0.006560] FS-Cache: N-cookie d=00000000f9f2d848{9P.session} n=000000005c3d05d3
	[  +0.007520] FS-Cache: N-key=[10] '34323935323639393534'
	[ +16.357657] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Aug21 10:45] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ca c1 a8 91 e2 bd c2 67 4a c6 ee 9c 08 00
	[  +1.028097] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: ca c1 a8 91 e2 bd c2 67 4a c6 ee 9c 08 00
	[  +2.015757] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: ca c1 a8 91 e2 bd c2 67 4a c6 ee 9c 08 00
	[  +4.063569] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ca c1 a8 91 e2 bd c2 67 4a c6 ee 9c 08 00
	[  +8.191209] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ca c1 a8 91 e2 bd c2 67 4a c6 ee 9c 08 00
	[ +16.126462] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: ca c1 a8 91 e2 bd c2 67 4a c6 ee 9c 08 00
	[Aug21 10:46] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ca c1 a8 91 e2 bd c2 67 4a c6 ee 9c 08 00
	
	* 
	* ==> etcd [bffceb376fff05cc69ae63d7bbd873c5de396ff3ff09b63ab704e2f4847cb943] <==
	* raft2023/08/21 10:43:42 INFO: aec36adc501070cc became follower at term 0
	raft2023/08/21 10:43:42 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/08/21 10:43:42 INFO: aec36adc501070cc became follower at term 1
	raft2023/08/21 10:43:42 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-08-21 10:43:42.136916 W | auth: simple token is not cryptographically signed
	2023-08-21 10:43:42.140567 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-08-21 10:43:42.141036 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/08/21 10:43:42 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-08-21 10:43:42.141676 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2023-08-21 10:43:42.142970 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-08-21 10:43:42.143190 I | embed: listening for peers on 192.168.49.2:2380
	2023-08-21 10:43:42.143256 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/08/21 10:43:42 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/08/21 10:43:42 INFO: aec36adc501070cc became candidate at term 2
	raft2023/08/21 10:43:42 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/08/21 10:43:42 INFO: aec36adc501070cc became leader at term 2
	raft2023/08/21 10:43:42 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-08-21 10:43:42.971542 I | etcdserver: published {Name:ingress-addon-legacy-218089 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-08-21 10:43:42.971561 I | embed: ready to serve client requests
	2023-08-21 10:43:42.971608 I | embed: ready to serve client requests
	2023-08-21 10:43:42.971674 I | etcdserver: setting up the initial cluster version to 3.4
	2023-08-21 10:43:42.972604 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-08-21 10:43:42.972739 I | etcdserver/api: enabled capabilities for version 3.4
	2023-08-21 10:43:42.973803 I | embed: serving client requests on 192.168.49.2:2379
	2023-08-21 10:43:42.973816 I | embed: serving client requests on 127.0.0.1:2379
	
	* 
	* ==> kernel <==
	*  10:47:37 up 30 min,  0 users,  load average: 0.48, 0.53, 0.40
	Linux ingress-addon-legacy-218089 5.15.0-1039-gcp #47~20.04.1-Ubuntu SMP Thu Jul 27 22:40:03 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [331e14b277202f1d9d01c5ba133ff0db8aeed60ab629d68e4f187bb55d807694] <==
	* I0821 10:45:27.581223       1 main.go:227] handling current node
	I0821 10:45:37.584563       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 10:45:37.584588       1 main.go:227] handling current node
	I0821 10:45:47.595198       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 10:45:47.595222       1 main.go:227] handling current node
	I0821 10:45:57.599272       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 10:45:57.599309       1 main.go:227] handling current node
	I0821 10:46:07.603292       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 10:46:07.603315       1 main.go:227] handling current node
	I0821 10:46:17.606343       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 10:46:17.606369       1 main.go:227] handling current node
	I0821 10:46:27.618459       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 10:46:27.618484       1 main.go:227] handling current node
	I0821 10:46:37.622642       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 10:46:37.622668       1 main.go:227] handling current node
	I0821 10:46:47.634859       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 10:46:47.634885       1 main.go:227] handling current node
	I0821 10:46:57.638476       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 10:46:57.638501       1 main.go:227] handling current node
	I0821 10:47:07.642437       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 10:47:07.642463       1 main.go:227] handling current node
	I0821 10:47:17.647046       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 10:47:17.647076       1 main.go:227] handling current node
	I0821 10:47:27.650618       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 10:47:27.650648       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [a3f808853f4cf6190455f5535c4cfbb4a72b803067907095aa35e59665fe5260] <==
	* I0821 10:43:45.735763       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0821 10:43:45.735763       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0821 10:43:45.735781       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0821 10:43:45.738409       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0821 10:43:45.739080       1 cache.go:39] Caches are synced for autoregister controller
	I0821 10:43:46.578289       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0821 10:43:46.578409       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0821 10:43:46.582722       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0821 10:43:46.585287       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0821 10:43:46.585305       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0821 10:43:46.845705       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0821 10:43:46.873236       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0821 10:43:46.966183       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0821 10:43:46.966947       1 controller.go:609] quota admission added evaluator for: endpoints
	I0821 10:43:46.969509       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0821 10:43:47.994763       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0821 10:43:48.172558       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0821 10:43:48.356877       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0821 10:43:48.585976       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0821 10:44:03.934282       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0821 10:44:03.967797       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0821 10:44:03.967797       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0821 10:44:23.239157       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0821 10:44:50.883276       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E0821 10:47:29.365313       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	
	* 
	* ==> kube-controller-manager [13e358ef01665c7bfcc36bd8e71f31f9c98f8ff44a1ce75d4c32c5e7c141f62e] <==
	* I0821 10:44:04.282471       1 shared_informer.go:230] Caches are synced for stateful set 
	I0821 10:44:04.288060       1 shared_informer.go:230] Caches are synced for attach detach 
	I0821 10:44:04.313732       1 shared_informer.go:230] Caches are synced for persistent volume 
	I0821 10:44:04.331747       1 shared_informer.go:230] Caches are synced for PVC protection 
	I0821 10:44:04.331917       1 shared_informer.go:230] Caches are synced for job 
	I0821 10:44:04.332113       1 shared_informer.go:230] Caches are synced for expand 
	I0821 10:44:04.431767       1 shared_informer.go:230] Caches are synced for bootstrap_signer 
	I0821 10:44:04.473998       1 shared_informer.go:230] Caches are synced for resource quota 
	I0821 10:44:04.485059       1 shared_informer.go:230] Caches are synced for resource quota 
	I0821 10:44:04.492880       1 shared_informer.go:230] Caches are synced for namespace 
	I0821 10:44:04.532247       1 shared_informer.go:230] Caches are synced for service account 
	I0821 10:44:04.546086       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0821 10:44:04.546172       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0821 10:44:04.582198       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0821 10:44:04.679192       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"51446498-7894-4b35-a4b3-968e29f2f15d", APIVersion:"apps/v1", ResourceVersion:"371", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I0821 10:44:04.742281       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"998d2731-61f2-400a-a7fa-a7d68388ccde", APIVersion:"apps/v1", ResourceVersion:"373", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-pb8qr
	I0821 10:44:08.984893       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0821 10:44:23.236051       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"a501d6ca-ff26-4c70-83b8-9e510f2f828c", APIVersion:"apps/v1", ResourceVersion:"478", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0821 10:44:23.243760       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"0497e103-860e-4df8-b7d4-ef65aa319fcd", APIVersion:"apps/v1", ResourceVersion:"479", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-bd4ss
	I0821 10:44:23.246273       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"867bc728-0029-4a76-b673-9b88bad938ef", APIVersion:"batch/v1", ResourceVersion:"482", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-pw2q6
	I0821 10:44:23.258358       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"5a1ba37a-5b4f-4113-bb83-8689e50514f5", APIVersion:"batch/v1", ResourceVersion:"492", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-t7l9l
	I0821 10:44:26.705035       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"867bc728-0029-4a76-b673-9b88bad938ef", APIVersion:"batch/v1", ResourceVersion:"494", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0821 10:44:27.750615       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"5a1ba37a-5b4f-4113-bb83-8689e50514f5", APIVersion:"batch/v1", ResourceVersion:"500", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0821 10:47:11.711039       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"d17ec393-85d2-4f74-b8bc-d9e8daa30270", APIVersion:"apps/v1", ResourceVersion:"725", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0821 10:47:11.717836       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"24f2914c-ddad-4bde-a67f-69e6246aade0", APIVersion:"apps/v1", ResourceVersion:"726", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-c7bbd
	
	* 
	* ==> kube-proxy [2d65a78dc5312d9a87b541a73871ecd8d6eca0dc42d894e823ce4cea5385d0e0] <==
	* W0821 10:44:04.459686       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0821 10:44:04.465542       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0821 10:44:04.465566       1 server_others.go:186] Using iptables Proxier.
	I0821 10:44:04.465834       1 server.go:583] Version: v1.18.20
	I0821 10:44:04.466323       1 config.go:133] Starting endpoints config controller
	I0821 10:44:04.466345       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0821 10:44:04.466379       1 config.go:315] Starting service config controller
	I0821 10:44:04.466394       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0821 10:44:04.566515       1 shared_informer.go:230] Caches are synced for endpoints config 
	I0821 10:44:04.566535       1 shared_informer.go:230] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [470f6a4b84b7ce30482cce563b3dff4b3ea9a3029f306108805946ce36a6ec80] <==
	* W0821 10:43:45.736763       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0821 10:43:45.749182       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0821 10:43:45.749210       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0821 10:43:45.750791       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0821 10:43:45.750815       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0821 10:43:45.751070       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0821 10:43:45.751136       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0821 10:43:45.753450       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0821 10:43:45.754485       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0821 10:43:45.755241       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0821 10:43:45.755446       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0821 10:43:45.755586       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0821 10:43:45.755680       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0821 10:43:45.755732       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0821 10:43:45.755788       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0821 10:43:45.755802       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0821 10:43:45.755823       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0821 10:43:45.755922       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0821 10:43:45.755995       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0821 10:43:46.624803       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0821 10:43:46.658489       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0821 10:43:46.705601       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0821 10:43:46.743099       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0821 10:43:46.836586       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0821 10:43:48.851035       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* Aug 21 10:46:58 ingress-addon-legacy-218089 kubelet[1859]: E0821 10:46:58.605605    1859 pod_workers.go:191] Error syncing pod 6bbbd363-a9fb-4b84-a87c-c775e0d6fc83 ("kube-ingress-dns-minikube_kube-system(6bbbd363-a9fb-4b84-a87c-c775e0d6fc83)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Aug 21 10:47:11 ingress-addon-legacy-218089 kubelet[1859]: E0821 10:47:11.605384    1859 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Aug 21 10:47:11 ingress-addon-legacy-218089 kubelet[1859]: E0821 10:47:11.605431    1859 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Aug 21 10:47:11 ingress-addon-legacy-218089 kubelet[1859]: E0821 10:47:11.605483    1859 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Aug 21 10:47:11 ingress-addon-legacy-218089 kubelet[1859]: E0821 10:47:11.605517    1859 pod_workers.go:191] Error syncing pod 6bbbd363-a9fb-4b84-a87c-c775e0d6fc83 ("kube-ingress-dns-minikube_kube-system(6bbbd363-a9fb-4b84-a87c-c775e0d6fc83)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Aug 21 10:47:11 ingress-addon-legacy-218089 kubelet[1859]: I0821 10:47:11.724800    1859 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Aug 21 10:47:11 ingress-addon-legacy-218089 kubelet[1859]: I0821 10:47:11.853408    1859 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-6djvn" (UniqueName: "kubernetes.io/secret/a392ef7d-d7be-4879-9d9b-2696b2563b73-default-token-6djvn") pod "hello-world-app-5f5d8b66bb-c7bbd" (UID: "a392ef7d-d7be-4879-9d9b-2696b2563b73")
	Aug 21 10:47:12 ingress-addon-legacy-218089 kubelet[1859]: W0821 10:47:12.072384    1859 manager.go:1131] Failed to process watch event {EventType:0 Name:/docker/13b46a8946be8c48fa2eff2b81309ff44c3c400a215536d3c1ff6bcb5a8b98da/crio-bc2027128f6255bb69d016c4bd293eaccef077c72fcae2411459407a7709966f WatchSource:0}: Error finding container bc2027128f6255bb69d016c4bd293eaccef077c72fcae2411459407a7709966f: Status 404 returned error &{%!s(*http.body=&{0xc0003b0880 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x750800) %!s(func() error=0x750790)}
	Aug 21 10:47:22 ingress-addon-legacy-218089 kubelet[1859]: E0821 10:47:22.605348    1859 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Aug 21 10:47:22 ingress-addon-legacy-218089 kubelet[1859]: E0821 10:47:22.605389    1859 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Aug 21 10:47:22 ingress-addon-legacy-218089 kubelet[1859]: E0821 10:47:22.605441    1859 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Aug 21 10:47:22 ingress-addon-legacy-218089 kubelet[1859]: E0821 10:47:22.605477    1859 pod_workers.go:191] Error syncing pod 6bbbd363-a9fb-4b84-a87c-c775e0d6fc83 ("kube-ingress-dns-minikube_kube-system(6bbbd363-a9fb-4b84-a87c-c775e0d6fc83)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Aug 21 10:47:27 ingress-addon-legacy-218089 kubelet[1859]: I0821 10:47:27.489100    1859 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-rvknf" (UniqueName: "kubernetes.io/secret/6bbbd363-a9fb-4b84-a87c-c775e0d6fc83-minikube-ingress-dns-token-rvknf") pod "6bbbd363-a9fb-4b84-a87c-c775e0d6fc83" (UID: "6bbbd363-a9fb-4b84-a87c-c775e0d6fc83")
	Aug 21 10:47:27 ingress-addon-legacy-218089 kubelet[1859]: I0821 10:47:27.490945    1859 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bbbd363-a9fb-4b84-a87c-c775e0d6fc83-minikube-ingress-dns-token-rvknf" (OuterVolumeSpecName: "minikube-ingress-dns-token-rvknf") pod "6bbbd363-a9fb-4b84-a87c-c775e0d6fc83" (UID: "6bbbd363-a9fb-4b84-a87c-c775e0d6fc83"). InnerVolumeSpecName "minikube-ingress-dns-token-rvknf". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 21 10:47:27 ingress-addon-legacy-218089 kubelet[1859]: I0821 10:47:27.589423    1859 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-rvknf" (UniqueName: "kubernetes.io/secret/6bbbd363-a9fb-4b84-a87c-c775e0d6fc83-minikube-ingress-dns-token-rvknf") on node "ingress-addon-legacy-218089" DevicePath ""
	Aug 21 10:47:29 ingress-addon-legacy-218089 kubelet[1859]: W0821 10:47:29.007205    1859 pod_container_deletor.go:77] Container "7e71e35ff25fabf58a7bea1364156a379211e15171ac8965ac39ba85891c9f0a" not found in pod's containers
	Aug 21 10:47:29 ingress-addon-legacy-218089 kubelet[1859]: E0821 10:47:29.355428    1859 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-bd4ss.177d603cd53bd127", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-bd4ss", UID:"8cfce820-9094-4a5d-9822-613ec58549b8", APIVersion:"v1", ResourceVersion:"485", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-218089"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc130ee1055098727, ext:221214902273, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc130ee1055098727, ext:221214902273, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-bd4ss.177d603cd53bd127" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 21 10:47:29 ingress-addon-legacy-218089 kubelet[1859]: E0821 10:47:29.358439    1859 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-bd4ss.177d603cd53bd127", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-bd4ss", UID:"8cfce820-9094-4a5d-9822-613ec58549b8", APIVersion:"v1", ResourceVersion:"485", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-218089"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc130ee1055098727, ext:221214902273, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc130ee105532f8ca, ext:221217618343, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-bd4ss.177d603cd53bd127" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 21 10:47:32 ingress-addon-legacy-218089 kubelet[1859]: W0821 10:47:32.012654    1859 pod_container_deletor.go:77] Container "c54b427f2745817ea571e3c65928ad2089b8231941b0b29e2ea83dbcd5a672b6" not found in pod's containers
	Aug 21 10:47:33 ingress-addon-legacy-218089 kubelet[1859]: I0821 10:47:33.545297    1859 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/8cfce820-9094-4a5d-9822-613ec58549b8-webhook-cert") pod "8cfce820-9094-4a5d-9822-613ec58549b8" (UID: "8cfce820-9094-4a5d-9822-613ec58549b8")
	Aug 21 10:47:33 ingress-addon-legacy-218089 kubelet[1859]: I0821 10:47:33.545353    1859 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-dmzb5" (UniqueName: "kubernetes.io/secret/8cfce820-9094-4a5d-9822-613ec58549b8-ingress-nginx-token-dmzb5") pod "8cfce820-9094-4a5d-9822-613ec58549b8" (UID: "8cfce820-9094-4a5d-9822-613ec58549b8")
	Aug 21 10:47:33 ingress-addon-legacy-218089 kubelet[1859]: I0821 10:47:33.547131    1859 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cfce820-9094-4a5d-9822-613ec58549b8-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "8cfce820-9094-4a5d-9822-613ec58549b8" (UID: "8cfce820-9094-4a5d-9822-613ec58549b8"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 21 10:47:33 ingress-addon-legacy-218089 kubelet[1859]: I0821 10:47:33.547434    1859 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cfce820-9094-4a5d-9822-613ec58549b8-ingress-nginx-token-dmzb5" (OuterVolumeSpecName: "ingress-nginx-token-dmzb5") pod "8cfce820-9094-4a5d-9822-613ec58549b8" (UID: "8cfce820-9094-4a5d-9822-613ec58549b8"). InnerVolumeSpecName "ingress-nginx-token-dmzb5". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 21 10:47:33 ingress-addon-legacy-218089 kubelet[1859]: I0821 10:47:33.645652    1859 reconciler.go:319] Volume detached for volume "ingress-nginx-token-dmzb5" (UniqueName: "kubernetes.io/secret/8cfce820-9094-4a5d-9822-613ec58549b8-ingress-nginx-token-dmzb5") on node "ingress-addon-legacy-218089" DevicePath ""
	Aug 21 10:47:33 ingress-addon-legacy-218089 kubelet[1859]: I0821 10:47:33.645692    1859 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/8cfce820-9094-4a5d-9822-613ec58549b8-webhook-cert") on node "ingress-addon-legacy-218089" DevicePath ""
	
	* 
	* ==> storage-provisioner [1704519fbd4d56dc67403104fb05181ec947efcf7041c0c9a3c871277dc54b76] <==
	* I0821 10:44:13.705115       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0821 10:44:13.744337       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0821 10:44:13.744386       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0821 10:44:13.750933       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0821 10:44:13.751098       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-218089_ffefe084-e210-4cb2-8f7c-a4904c016c7b!
	I0821 10:44:13.753095       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2eb37f27-47b8-4174-9ba9-7f7bbd5ce80f", APIVersion:"v1", ResourceVersion:"427", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-218089_ffefe084-e210-4cb2-8f7c-a4904c016c7b became leader
	I0821 10:44:13.851586       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-218089_ffefe084-e210-4cb2-8f7c-a4904c016c7b!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-218089 -n ingress-addon-legacy-218089
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-218089 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (184.33s)
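
Every "ImageInspectError" in the kubelet log above traces back to short-name resolution: the ingress-dns addon references the image as cryptexlabs/minikube-ingress-dns:0.3.0 (a short name), and the node's /etc/containers/registries.conf declares no unqualified-search registries, so CRI-O refuses to expand the name before ever attempting a pull. A minimal sketch of the config that would let resolution proceed, assuming the file inside the node is editable — docker.io as the search registry is an assumption, not something this report states:

	# /etc/containers/registries.conf (sketch; docker.io is an assumed choice)
	unqualified-search-registries = ["docker.io"]

Fully qualifying the reference in the addon manifest (docker.io/cryptexlabs/minikube-ingress-dns:0.3.0@sha256:...) would sidestep short-name resolution entirely.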

TestMultiNode/serial/PingHostFrom2Pods (3.19s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-200985 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-200985 -- exec busybox-67b7f59bb-4kkp2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-200985 -- exec busybox-67b7f59bb-4kkp2 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-200985 -- exec busybox-67b7f59bb-4kkp2 -- sh -c "ping -c 1 192.168.58.1": exit status 1 (163.612226ms)

-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-67b7f59bb-4kkp2): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-200985 -- exec busybox-67b7f59bb-vtjvj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-200985 -- exec busybox-67b7f59bb-vtjvj -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-200985 -- exec busybox-67b7f59bb-vtjvj -- sh -c "ping -c 1 192.168.58.1": exit status 1 (165.551551ms)

-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-67b7f59bb-vtjvj): exit status 1
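
Both exec attempts fail the same way: BusyBox's ping wants a raw ICMP socket, and the unprivileged busybox container has no CAP_NET_RAW, hence "ping: permission denied (are you root?)". A minimal sketch of a pod spec that would grant the capability — the names and image below are illustrative, not the manifest this test actually deploys:

	# sketch: let an unprivileged busybox open raw ICMP sockets
	apiVersion: v1
	kind: Pod
	metadata:
	  name: busybox-ping          # hypothetical name
	spec:
	  containers:
	  - name: busybox
	    image: busybox            # assumed image
	    command: ["sleep", "3600"]
	    securityContext:
	      capabilities:
	        add: ["NET_RAW"]

An alternative is widening the node sysctl net.ipv4.ping_group_range (its default of "1 0" disables it) so unprivileged processes can use ICMP datagram sockets, provided the ping binary in the image supports that fallback.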
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-200985
helpers_test.go:235: (dbg) docker inspect multinode-200985:

-- stdout --
	[
	    {
	        "Id": "30a11af662edf967ffb99de2ef034ce516ea0aacab8a798c9436236a541bf91a",
	        "Created": "2023-08-21T10:52:34.209083231Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 98128,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-08-21T10:52:34.494134099Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6cc01e6091959400f260dc442708e7c71630b58dab1f7c344cb00926bd84950",
	        "ResolvConfPath": "/var/lib/docker/containers/30a11af662edf967ffb99de2ef034ce516ea0aacab8a798c9436236a541bf91a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/30a11af662edf967ffb99de2ef034ce516ea0aacab8a798c9436236a541bf91a/hostname",
	        "HostsPath": "/var/lib/docker/containers/30a11af662edf967ffb99de2ef034ce516ea0aacab8a798c9436236a541bf91a/hosts",
	        "LogPath": "/var/lib/docker/containers/30a11af662edf967ffb99de2ef034ce516ea0aacab8a798c9436236a541bf91a/30a11af662edf967ffb99de2ef034ce516ea0aacab8a798c9436236a541bf91a-json.log",
	        "Name": "/multinode-200985",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-200985:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-200985",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/54fb52023b86bf97e5676ae67e00b57ab1be66bb2e0f0f4e0b85f433aaacbeff-init/diff:/var/lib/docker/overlay2/524bb0f129210e266d288d085768bab72d4735717d72ebbb4611a7bc558cb4ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/54fb52023b86bf97e5676ae67e00b57ab1be66bb2e0f0f4e0b85f433aaacbeff/merged",
	                "UpperDir": "/var/lib/docker/overlay2/54fb52023b86bf97e5676ae67e00b57ab1be66bb2e0f0f4e0b85f433aaacbeff/diff",
	                "WorkDir": "/var/lib/docker/overlay2/54fb52023b86bf97e5676ae67e00b57ab1be66bb2e0f0f4e0b85f433aaacbeff/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "multinode-200985",
	                "Source": "/var/lib/docker/volumes/multinode-200985/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-200985",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-200985",
	                "name.minikube.sigs.k8s.io": "multinode-200985",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8ed7d850b8290b6ff3bb9cdbf9b9ba2bfe9146c10a6985d67f0f77c52dc8b817",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32847"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32846"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32843"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32845"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32844"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/8ed7d850b829",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-200985": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "30a11af662ed",
	                        "multinode-200985"
	                    ],
	                    "NetworkID": "ede9cfb77cb945f92caaae7e4c2fb8bc11d3eee970d3080b083e3a9cee1733a4",
	                    "EndpointID": "54bae687c139c96942423412c533eea7e6561276a555f4befe797df15e7bd237",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
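
To pull a single field out of a dump like this, docker inspect accepts a Go template; two examples against the state captured above (standard docker CLI syntax, values as recorded in this report):

	docker inspect multinode-200985 --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'   # 32844
	docker inspect multinode-200985 --format '{{(index .NetworkSettings.Networks "multinode-200985").IPAddress}}' # 192.168.58.2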
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-200985 -n multinode-200985
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200985 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-200985 logs -n 25: (1.258800471s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-816149                           | mount-start-2-816149 | jenkins | v1.31.2 | 21 Aug 23 10:52 UTC | 21 Aug 23 10:52 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-816149 ssh -- ls                    | mount-start-2-816149 | jenkins | v1.31.2 | 21 Aug 23 10:52 UTC | 21 Aug 23 10:52 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-801235                           | mount-start-1-801235 | jenkins | v1.31.2 | 21 Aug 23 10:52 UTC | 21 Aug 23 10:52 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-816149 ssh -- ls                    | mount-start-2-816149 | jenkins | v1.31.2 | 21 Aug 23 10:52 UTC | 21 Aug 23 10:52 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-816149                           | mount-start-2-816149 | jenkins | v1.31.2 | 21 Aug 23 10:52 UTC | 21 Aug 23 10:52 UTC |
	| start   | -p mount-start-2-816149                           | mount-start-2-816149 | jenkins | v1.31.2 | 21 Aug 23 10:52 UTC | 21 Aug 23 10:52 UTC |
	| ssh     | mount-start-2-816149 ssh -- ls                    | mount-start-2-816149 | jenkins | v1.31.2 | 21 Aug 23 10:52 UTC | 21 Aug 23 10:52 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-816149                           | mount-start-2-816149 | jenkins | v1.31.2 | 21 Aug 23 10:52 UTC | 21 Aug 23 10:52 UTC |
	| delete  | -p mount-start-1-801235                           | mount-start-1-801235 | jenkins | v1.31.2 | 21 Aug 23 10:52 UTC | 21 Aug 23 10:52 UTC |
	| start   | -p multinode-200985                               | multinode-200985     | jenkins | v1.31.2 | 21 Aug 23 10:52 UTC | 21 Aug 23 10:54 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-200985 -- apply -f                   | multinode-200985     | jenkins | v1.31.2 | 21 Aug 23 10:54 UTC | 21 Aug 23 10:54 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-200985 -- rollout                    | multinode-200985     | jenkins | v1.31.2 | 21 Aug 23 10:54 UTC | 21 Aug 23 10:54 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-200985 -- get pods -o                | multinode-200985     | jenkins | v1.31.2 | 21 Aug 23 10:54 UTC | 21 Aug 23 10:54 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-200985 -- get pods -o                | multinode-200985     | jenkins | v1.31.2 | 21 Aug 23 10:54 UTC | 21 Aug 23 10:54 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-200985 -- exec                       | multinode-200985     | jenkins | v1.31.2 | 21 Aug 23 10:54 UTC | 21 Aug 23 10:54 UTC |
	|         | busybox-67b7f59bb-4kkp2 --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-200985 -- exec                       | multinode-200985     | jenkins | v1.31.2 | 21 Aug 23 10:54 UTC | 21 Aug 23 10:54 UTC |
	|         | busybox-67b7f59bb-vtjvj --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-200985 -- exec                       | multinode-200985     | jenkins | v1.31.2 | 21 Aug 23 10:54 UTC | 21 Aug 23 10:54 UTC |
	|         | busybox-67b7f59bb-4kkp2 --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-200985 -- exec                       | multinode-200985     | jenkins | v1.31.2 | 21 Aug 23 10:54 UTC | 21 Aug 23 10:54 UTC |
	|         | busybox-67b7f59bb-vtjvj --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-200985 -- exec                       | multinode-200985     | jenkins | v1.31.2 | 21 Aug 23 10:54 UTC | 21 Aug 23 10:54 UTC |
	|         | busybox-67b7f59bb-4kkp2 -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-200985 -- exec                       | multinode-200985     | jenkins | v1.31.2 | 21 Aug 23 10:54 UTC | 21 Aug 23 10:54 UTC |
	|         | busybox-67b7f59bb-vtjvj -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-200985 -- get pods -o                | multinode-200985     | jenkins | v1.31.2 | 21 Aug 23 10:54 UTC | 21 Aug 23 10:54 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-200985 -- exec                       | multinode-200985     | jenkins | v1.31.2 | 21 Aug 23 10:54 UTC | 21 Aug 23 10:54 UTC |
	|         | busybox-67b7f59bb-4kkp2                           |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-200985 -- exec                       | multinode-200985     | jenkins | v1.31.2 | 21 Aug 23 10:54 UTC |                     |
	|         | busybox-67b7f59bb-4kkp2 -- sh                     |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-200985 -- exec                       | multinode-200985     | jenkins | v1.31.2 | 21 Aug 23 10:54 UTC | 21 Aug 23 10:54 UTC |
	|         | busybox-67b7f59bb-vtjvj                           |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-200985 -- exec                       | multinode-200985     | jenkins | v1.31.2 | 21 Aug 23 10:54 UTC |                     |
	|         | busybox-67b7f59bb-vtjvj -- sh                     |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/21 10:52:28
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0821 10:52:28.565844   97516 out.go:296] Setting OutFile to fd 1 ...
	I0821 10:52:28.565984   97516 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 10:52:28.565993   97516 out.go:309] Setting ErrFile to fd 2...
	I0821 10:52:28.565998   97516 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 10:52:28.566194   97516 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17102-5717/.minikube/bin
	I0821 10:52:28.566728   97516 out.go:303] Setting JSON to false
	I0821 10:52:28.567751   97516 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2099,"bootTime":1692613050,"procs":375,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0821 10:52:28.567805   97516 start.go:138] virtualization: kvm guest
	I0821 10:52:28.570374   97516 out.go:177] * [multinode-200985] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0821 10:52:28.571868   97516 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 10:52:28.573261   97516 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 10:52:28.571920   97516 notify.go:220] Checking for updates...
	I0821 10:52:28.576234   97516 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17102-5717/kubeconfig
	I0821 10:52:28.577570   97516 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17102-5717/.minikube
	I0821 10:52:28.578852   97516 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0821 10:52:28.580528   97516 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 10:52:28.582006   97516 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 10:52:28.603101   97516 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0821 10:52:28.603207   97516 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0821 10:52:28.654027   97516 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:36 SystemTime:2023-08-21 10:52:28.646089885 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0821 10:52:28.654142   97516 docker.go:294] overlay module found
	I0821 10:52:28.656139   97516 out.go:177] * Using the docker driver based on user configuration
	I0821 10:52:28.657518   97516 start.go:298] selected driver: docker
	I0821 10:52:28.657529   97516 start.go:902] validating driver "docker" against <nil>
	I0821 10:52:28.657540   97516 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 10:52:28.658259   97516 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0821 10:52:28.714394   97516 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:36 SystemTime:2023-08-21 10:52:28.706436863 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0821 10:52:28.714629   97516 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0821 10:52:28.714831   97516 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0821 10:52:28.716809   97516 out.go:177] * Using Docker driver with root privileges
	I0821 10:52:28.718243   97516 cni.go:84] Creating CNI manager for ""
	I0821 10:52:28.718257   97516 cni.go:136] 0 nodes found, recommending kindnet
	I0821 10:52:28.718268   97516 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0821 10:52:28.718287   97516 start_flags.go:319] config:
	{Name:multinode-200985 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:multinode-200985 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 10:52:28.719875   97516 out.go:177] * Starting control plane node multinode-200985 in cluster multinode-200985
	I0821 10:52:28.721267   97516 cache.go:122] Beginning downloading kic base image for docker with crio
	I0821 10:52:28.722668   97516 out.go:177] * Pulling base image ...
	I0821 10:52:28.723940   97516 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0821 10:52:28.723963   97516 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0821 10:52:28.723978   97516 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17102-5717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4
	I0821 10:52:28.723990   97516 cache.go:57] Caching tarball of preloaded images
	I0821 10:52:28.724099   97516 preload.go:174] Found /home/jenkins/minikube-integration/17102-5717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0821 10:52:28.724109   97516 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on crio
	I0821 10:52:28.724442   97516 profile.go:148] Saving config to /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/multinode-200985/config.json ...
	I0821 10:52:28.724474   97516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/multinode-200985/config.json: {Name:mk0829b9f45d898f2f5457b9b3c6e3bfb3cf25a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 10:52:28.739645   97516 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0821 10:52:28.739667   97516 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0821 10:52:28.739681   97516 cache.go:195] Successfully downloaded all kic artifacts
	I0821 10:52:28.739717   97516 start.go:365] acquiring machines lock for multinode-200985: {Name:mk2c090804925662dba271079c7531f97af8a121 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 10:52:28.739811   97516 start.go:369] acquired machines lock for "multinode-200985" in 75.2µs
	I0821 10:52:28.739837   97516 start.go:93] Provisioning new machine with config: &{Name:multinode-200985 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:multinode-200985 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0821 10:52:28.739921   97516 start.go:125] createHost starting for "" (driver="docker")
	I0821 10:52:28.741909   97516 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0821 10:52:28.742113   97516 start.go:159] libmachine.API.Create for "multinode-200985" (driver="docker")
	I0821 10:52:28.742141   97516 client.go:168] LocalClient.Create starting
	I0821 10:52:28.742209   97516 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca.pem
	I0821 10:52:28.742248   97516 main.go:141] libmachine: Decoding PEM data...
	I0821 10:52:28.742270   97516 main.go:141] libmachine: Parsing certificate...
	I0821 10:52:28.742328   97516 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17102-5717/.minikube/certs/cert.pem
	I0821 10:52:28.742353   97516 main.go:141] libmachine: Decoding PEM data...
	I0821 10:52:28.742383   97516 main.go:141] libmachine: Parsing certificate...
	I0821 10:52:28.742725   97516 cli_runner.go:164] Run: docker network inspect multinode-200985 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0821 10:52:28.757979   97516 cli_runner.go:211] docker network inspect multinode-200985 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0821 10:52:28.758060   97516 network_create.go:281] running [docker network inspect multinode-200985] to gather additional debugging logs...
	I0821 10:52:28.758082   97516 cli_runner.go:164] Run: docker network inspect multinode-200985
	W0821 10:52:28.773875   97516 cli_runner.go:211] docker network inspect multinode-200985 returned with exit code 1
	I0821 10:52:28.773899   97516 network_create.go:284] error running [docker network inspect multinode-200985]: docker network inspect multinode-200985: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-200985 not found
	I0821 10:52:28.773910   97516 network_create.go:286] output of [docker network inspect multinode-200985]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-200985 not found
	
	** /stderr **
	I0821 10:52:28.773958   97516 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0821 10:52:28.789675   97516 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-cdc8f51f403f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:e7:16:ac:33} reservation:<nil>}
	I0821 10:52:28.790118   97516 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001475ab0}
	I0821 10:52:28.790137   97516 network_create.go:123] attempt to create docker network multinode-200985 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0821 10:52:28.790192   97516 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-200985 multinode-200985
	I0821 10:52:28.840285   97516 network_create.go:107] docker network multinode-200985 192.168.58.0/24 created
	I0821 10:52:28.840311   97516 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-200985" container
	I0821 10:52:28.840375   97516 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0821 10:52:28.856038   97516 cli_runner.go:164] Run: docker volume create multinode-200985 --label name.minikube.sigs.k8s.io=multinode-200985 --label created_by.minikube.sigs.k8s.io=true
	I0821 10:52:28.872603   97516 oci.go:103] Successfully created a docker volume multinode-200985
	I0821 10:52:28.872665   97516 cli_runner.go:164] Run: docker run --rm --name multinode-200985-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-200985 --entrypoint /usr/bin/test -v multinode-200985:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0821 10:52:29.373837   97516 oci.go:107] Successfully prepared a docker volume multinode-200985
	I0821 10:52:29.373873   97516 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0821 10:52:29.373896   97516 kic.go:190] Starting extracting preloaded images to volume ...
	I0821 10:52:29.373967   97516 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17102-5717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-200985:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0821 10:52:34.144108   97516 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17102-5717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-200985:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (4.770069612s)
	I0821 10:52:34.144142   97516 kic.go:199] duration metric: took 4.770243 seconds to extract preloaded images to volume
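
The extraction step above is a plain docker invocation: the preloaded lz4 tarball is bind-mounted read-only into a throwaway kicbase container whose entrypoint is /usr/bin/tar, which unpacks it into the named volume. A sketch with os/exec using the arguments from the log (the image digest is dropped here for brevity):

    // Replay of the preload step: untar the cached images into the
    // multinode-200985 volume via a temporary kicbase container.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	tarball := "/home/jenkins/minikube-integration/17102-5717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4"
    	cmd := exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro",
    		"-v", "multinode-200985:/extractDir",
    		"gcr.io/k8s-minikube/kicbase:v0.0.40",
    		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		panic(string(out))
    	}
    	fmt.Println("preload extracted into volume")
    }
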
	W0821 10:52:34.144260   97516 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0821 10:52:34.144347   97516 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0821 10:52:34.194782   97516 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-200985 --name multinode-200985 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-200985 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-200985 --network multinode-200985 --ip 192.168.58.2 --volume multinode-200985:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0821 10:52:34.502152   97516 cli_runner.go:164] Run: docker container inspect multinode-200985 --format={{.State.Running}}
	I0821 10:52:34.518159   97516 cli_runner.go:164] Run: docker container inspect multinode-200985 --format={{.State.Status}}
	I0821 10:52:34.534865   97516 cli_runner.go:164] Run: docker exec multinode-200985 stat /var/lib/dpkg/alternatives/iptables
	I0821 10:52:34.594956   97516 oci.go:144] the created container "multinode-200985" has a running status.
	I0821 10:52:34.594989   97516 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17102-5717/.minikube/machines/multinode-200985/id_rsa...
	I0821 10:52:34.799989   97516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/machines/multinode-200985/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0821 10:52:34.800045   97516 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17102-5717/.minikube/machines/multinode-200985/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0821 10:52:34.820838   97516 cli_runner.go:164] Run: docker container inspect multinode-200985 --format={{.State.Status}}
	I0821 10:52:34.837599   97516 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0821 10:52:34.837619   97516 kic_runner.go:114] Args: [docker exec --privileged multinode-200985 chown docker:docker /home/docker/.ssh/authorized_keys]
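
kic.go generates a per-machine SSH keypair and installs the public half as /home/docker/.ssh/authorized_keys inside the container (copied, then chown'd to docker:docker, as the two kic_runner lines above show). A self-contained sketch of the key-generation half, using golang.org/x/crypto/ssh for the authorized_keys encoding; the output file names are illustrative:

    // Generate an RSA keypair: a PEM private key plus an authorized_keys-format
    // public key, mirroring id_rsa / id_rsa.pub in the machines directory.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"encoding/pem"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	privPEM := pem.EncodeToMemory(&pem.Block{
    		Type:  "RSA PRIVATE KEY",
    		Bytes: x509.MarshalPKCS1PrivateKey(key),
    	})
    	pub, err := ssh.NewPublicKey(&key.PublicKey)
    	if err != nil {
    		panic(err)
    	}
    	os.WriteFile("id_rsa", privPEM, 0o600)
    	os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644)
    }
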
	I0821 10:52:34.947063   97516 cli_runner.go:164] Run: docker container inspect multinode-200985 --format={{.State.Status}}
	I0821 10:52:34.967191   97516 machine.go:88] provisioning docker machine ...
	I0821 10:52:34.967241   97516 ubuntu.go:169] provisioning hostname "multinode-200985"
	I0821 10:52:34.967313   97516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-200985
	I0821 10:52:34.987956   97516 main.go:141] libmachine: Using SSH client type: native
	I0821 10:52:34.988381   97516 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I0821 10:52:34.988398   97516 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-200985 && echo "multinode-200985" | sudo tee /etc/hostname
	I0821 10:52:35.198075   97516 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-200985
	
	I0821 10:52:35.198143   97516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-200985
	I0821 10:52:35.216282   97516 main.go:141] libmachine: Using SSH client type: native
	I0821 10:52:35.216813   97516 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I0821 10:52:35.216838   97516 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-200985' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-200985/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-200985' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0821 10:52:35.343058   97516 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0821 10:52:35.343083   97516 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17102-5717/.minikube CaCertPath:/home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17102-5717/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17102-5717/.minikube}
	I0821 10:52:35.343103   97516 ubuntu.go:177] setting up certificates
	I0821 10:52:35.343112   97516 provision.go:83] configureAuth start
	I0821 10:52:35.343152   97516 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-200985
	I0821 10:52:35.358838   97516 provision.go:138] copyHostCerts
	I0821 10:52:35.358870   97516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17102-5717/.minikube/ca.pem
	I0821 10:52:35.358902   97516 exec_runner.go:144] found /home/jenkins/minikube-integration/17102-5717/.minikube/ca.pem, removing ...
	I0821 10:52:35.358911   97516 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17102-5717/.minikube/ca.pem
	I0821 10:52:35.358973   97516 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17102-5717/.minikube/ca.pem (1078 bytes)
	I0821 10:52:35.359038   97516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17102-5717/.minikube/cert.pem
	I0821 10:52:35.359055   97516 exec_runner.go:144] found /home/jenkins/minikube-integration/17102-5717/.minikube/cert.pem, removing ...
	I0821 10:52:35.359062   97516 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17102-5717/.minikube/cert.pem
	I0821 10:52:35.359084   97516 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17102-5717/.minikube/cert.pem (1123 bytes)
	I0821 10:52:35.359129   97516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17102-5717/.minikube/key.pem
	I0821 10:52:35.359144   97516 exec_runner.go:144] found /home/jenkins/minikube-integration/17102-5717/.minikube/key.pem, removing ...
	I0821 10:52:35.359149   97516 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17102-5717/.minikube/key.pem
	I0821 10:52:35.359168   97516 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17102-5717/.minikube/key.pem (1675 bytes)
	I0821 10:52:35.359210   97516 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17102-5717/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca-key.pem org=jenkins.multinode-200985 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-200985]
	I0821 10:52:35.426104   97516 provision.go:172] copyRemoteCerts
	I0821 10:52:35.426161   97516 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0821 10:52:35.426195   97516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-200985
	I0821 10:52:35.442437   97516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/multinode-200985/id_rsa Username:docker}
	I0821 10:52:35.531079   97516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0821 10:52:35.531142   97516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0821 10:52:35.551958   97516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0821 10:52:35.552022   97516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0821 10:52:35.572933   97516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0821 10:52:35.572991   97516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0821 10:52:35.593093   97516 provision.go:86] duration metric: configureAuth took 249.970733ms
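
configureAuth signs a server certificate against the machine CA with the SANs listed in the log (192.168.58.2, 127.0.0.1, localhost, minikube, multinode-200985), then copies ca.pem, server.pem and server-key.pem into /etc/docker on the node. A compact crypto/x509 sketch of the SAN handling; a throwaway self-signed CA stands in for the real one, and this is not minikube's provision.go:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    // serverCert signs a cert for the node with the SANs seen in the log.
    func serverCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-200985"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(10, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"localhost", "minikube", "multinode-200985"},
    		IPAddresses:  []net.IP{net.ParseIP("192.168.58.2"), net.ParseIP("127.0.0.1")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
    	return der, key, err
    }

    func main() {
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1), Subject: pkix.Name{CommonName: "minikubeCA"},
    		NotBefore: time.Now(), NotAfter: time.Now().AddDate(10, 0, 0),
    		IsCA: true, KeyUsage: x509.KeyUsageCertSign, BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	ca, _ := x509.ParseCertificate(caDER)
    	der, _, err := serverCert(ca, caKey)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("server cert signed,", len(der), "DER bytes")
    }
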
	I0821 10:52:35.593116   97516 ubuntu.go:193] setting minikube options for container-runtime
	I0821 10:52:35.593273   97516 config.go:182] Loaded profile config "multinode-200985": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0821 10:52:35.593367   97516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-200985
	I0821 10:52:35.609115   97516 main.go:141] libmachine: Using SSH client type: native
	I0821 10:52:35.609677   97516 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I0821 10:52:35.609705   97516 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0821 10:52:35.812074   97516 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0821 10:52:35.812104   97516 machine.go:91] provisioned docker machine in 844.881687ms
	I0821 10:52:35.812116   97516 client.go:171] LocalClient.Create took 7.06996785s
	I0821 10:52:35.812139   97516 start.go:167] duration metric: libmachine.API.Create for "multinode-200985" took 7.070025935s
	I0821 10:52:35.812150   97516 start.go:300] post-start starting for "multinode-200985" (driver="docker")
	I0821 10:52:35.812162   97516 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0821 10:52:35.812252   97516 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0821 10:52:35.812304   97516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-200985
	I0821 10:52:35.828033   97516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/multinode-200985/id_rsa Username:docker}
	I0821 10:52:35.919496   97516 ssh_runner.go:195] Run: cat /etc/os-release
	I0821 10:52:35.922178   97516 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.2 LTS"
	I0821 10:52:35.922191   97516 command_runner.go:130] > NAME="Ubuntu"
	I0821 10:52:35.922196   97516 command_runner.go:130] > VERSION_ID="22.04"
	I0821 10:52:35.922201   97516 command_runner.go:130] > VERSION="22.04.2 LTS (Jammy Jellyfish)"
	I0821 10:52:35.922206   97516 command_runner.go:130] > VERSION_CODENAME=jammy
	I0821 10:52:35.922210   97516 command_runner.go:130] > ID=ubuntu
	I0821 10:52:35.922217   97516 command_runner.go:130] > ID_LIKE=debian
	I0821 10:52:35.922221   97516 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0821 10:52:35.922226   97516 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0821 10:52:35.922231   97516 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0821 10:52:35.922238   97516 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0821 10:52:35.922245   97516 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0821 10:52:35.922284   97516 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0821 10:52:35.922305   97516 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0821 10:52:35.922315   97516 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0821 10:52:35.922323   97516 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0821 10:52:35.922331   97516 filesync.go:126] Scanning /home/jenkins/minikube-integration/17102-5717/.minikube/addons for local assets ...
	I0821 10:52:35.922386   97516 filesync.go:126] Scanning /home/jenkins/minikube-integration/17102-5717/.minikube/files for local assets ...
	I0821 10:52:35.922452   97516 filesync.go:149] local asset: /home/jenkins/minikube-integration/17102-5717/.minikube/files/etc/ssl/certs/124602.pem -> 124602.pem in /etc/ssl/certs
	I0821 10:52:35.922461   97516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/files/etc/ssl/certs/124602.pem -> /etc/ssl/certs/124602.pem
	I0821 10:52:35.922531   97516 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0821 10:52:35.929690   97516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/files/etc/ssl/certs/124602.pem --> /etc/ssl/certs/124602.pem (1708 bytes)
	I0821 10:52:35.949489   97516 start.go:303] post-start completed in 137.327335ms
	I0821 10:52:35.949849   97516 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-200985
	I0821 10:52:35.966178   97516 profile.go:148] Saving config to /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/multinode-200985/config.json ...
	I0821 10:52:35.966399   97516 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0821 10:52:35.966435   97516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-200985
	I0821 10:52:35.981806   97516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/multinode-200985/id_rsa Username:docker}
	I0821 10:52:36.067525   97516 command_runner.go:130] > 19%
	I0821 10:52:36.067687   97516 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0821 10:52:36.071551   97516 command_runner.go:130] > 238G
	I0821 10:52:36.071574   97516 start.go:128] duration metric: createHost completed in 7.331644954s
	I0821 10:52:36.071585   97516 start.go:83] releasing machines lock for "multinode-200985", held for 7.331762748s
	I0821 10:52:36.071653   97516 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-200985
	I0821 10:52:36.087372   97516 ssh_runner.go:195] Run: cat /version.json
	I0821 10:52:36.087422   97516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-200985
	I0821 10:52:36.087421   97516 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0821 10:52:36.087482   97516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-200985
	I0821 10:52:36.105001   97516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/multinode-200985/id_rsa Username:docker}
	I0821 10:52:36.105330   97516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/multinode-200985/id_rsa Username:docker}
	I0821 10:52:36.278302   97516 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0821 10:52:36.280367   97516 command_runner.go:130] > {"iso_version": "v1.30.1-1689243309-16875", "kicbase_version": "v0.0.40", "minikube_version": "v1.31.0", "commit": "085433cd1b734742870dea5be8f9ee2ce4c54148"}
	I0821 10:52:36.280555   97516 ssh_runner.go:195] Run: systemctl --version
	I0821 10:52:36.284302   97516 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.9)
	I0821 10:52:36.284326   97516 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0821 10:52:36.284461   97516 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0821 10:52:36.419501   97516 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0821 10:52:36.423495   97516 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0821 10:52:36.423521   97516 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0821 10:52:36.423528   97516 command_runner.go:130] > Device: 34h/52d	Inode: 540046      Links: 1
	I0821 10:52:36.423534   97516 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0821 10:52:36.423544   97516 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0821 10:52:36.423553   97516 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0821 10:52:36.423561   97516 command_runner.go:130] > Change: 2023-08-21 10:33:50.706032318 +0000
	I0821 10:52:36.423570   97516 command_runner.go:130] >  Birth: 2023-08-21 10:33:50.706032318 +0000
	I0821 10:52:36.423628   97516 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0821 10:52:36.440417   97516 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0821 10:52:36.440502   97516 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0821 10:52:36.467322   97516 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0821 10:52:36.467376   97516 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
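
Note that the default CNI configs are renamed rather than deleted: a .mk_disabled suffix is enough for CRI-O to ignore them, and it keeps the originals recoverable. An equivalent Go sketch of the find/-exec mv shell above:

    // Disable loopback/bridge/podman CNI configs by renaming them.
    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	for _, pat := range []string{"*loopback.conf*", "*bridge*", "*podman*"} {
    		matches, _ := filepath.Glob(filepath.Join("/etc/cni/net.d", pat))
    		for _, m := range matches {
    			if strings.HasSuffix(m, ".mk_disabled") {
    				continue // already disabled
    			}
    			if err := os.Rename(m, m+".mk_disabled"); err != nil {
    				fmt.Println("skip:", err)
    			}
    		}
    	}
    }
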
	I0821 10:52:36.467386   97516 start.go:466] detecting cgroup driver to use...
	I0821 10:52:36.467420   97516 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0821 10:52:36.467462   97516 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0821 10:52:36.480239   97516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0821 10:52:36.489628   97516 docker.go:196] disabling cri-docker service (if available) ...
	I0821 10:52:36.489690   97516 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0821 10:52:36.501125   97516 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0821 10:52:36.512989   97516 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0821 10:52:36.583991   97516 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0821 10:52:36.666272   97516 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0821 10:52:36.666303   97516 docker.go:212] disabling docker service ...
	I0821 10:52:36.666345   97516 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0821 10:52:36.683050   97516 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0821 10:52:36.693100   97516 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0821 10:52:36.768530   97516 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0821 10:52:36.768607   97516 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0821 10:52:36.848553   97516 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0821 10:52:36.848619   97516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0821 10:52:36.858641   97516 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0821 10:52:36.871316   97516 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0821 10:52:36.872093   97516 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0821 10:52:36.872158   97516 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0821 10:52:36.880158   97516 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0821 10:52:36.880202   97516 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0821 10:52:36.888128   97516 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0821 10:52:36.895873   97516 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0821 10:52:36.904001   97516 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0821 10:52:36.911814   97516 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0821 10:52:36.918809   97516 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0821 10:52:36.918873   97516 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0821 10:52:36.925813   97516 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0821 10:52:37.003830   97516 ssh_runner.go:195] Run: sudo systemctl restart crio
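
The three sed invocations above amount to a small rewrite of the CRI-O drop-in: pin the pause image to registry.k8s.io/pause:3.9, force cgroup_manager to "cgroupfs" to match the detected host driver, and pin conmon_cgroup to "pod" (with the cgroupfs manager, CRI-O expects conmon_cgroup to be "pod" rather than a systemd slice). A Go sketch of the same edits via regexp; illustrative only, since minikube shells out to sed:

    // Rewrite /etc/crio/crio.conf.d/02-crio.conf in place, mirroring the
    // three sed commands from the log.
    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	path := "/etc/crio/crio.conf.d/02-crio.conf"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
    	data = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*$\n?`).
    		ReplaceAll(data, nil) // drop any pre-existing conmon_cgroup line
    	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAll(data, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
    	if err := os.WriteFile(path, data, 0o644); err != nil {
    		panic(err)
    	}
    }
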
	I0821 10:52:37.091575   97516 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0821 10:52:37.091640   97516 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0821 10:52:37.094702   97516 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0821 10:52:37.094725   97516 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0821 10:52:37.094734   97516 command_runner.go:130] > Device: 40h/64d	Inode: 186         Links: 1
	I0821 10:52:37.094744   97516 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0821 10:52:37.094756   97516 command_runner.go:130] > Access: 2023-08-21 10:52:37.078103092 +0000
	I0821 10:52:37.094770   97516 command_runner.go:130] > Modify: 2023-08-21 10:52:37.078103092 +0000
	I0821 10:52:37.094779   97516 command_runner.go:130] > Change: 2023-08-21 10:52:37.078103092 +0000
	I0821 10:52:37.094787   97516 command_runner.go:130] >  Birth: -
	I0821 10:52:37.094806   97516 start.go:534] Will wait 60s for crictl version
	I0821 10:52:37.094851   97516 ssh_runner.go:195] Run: which crictl
	I0821 10:52:37.097650   97516 command_runner.go:130] > /usr/bin/crictl
	I0821 10:52:37.097718   97516 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0821 10:52:37.126170   97516 command_runner.go:130] > Version:  0.1.0
	I0821 10:52:37.126193   97516 command_runner.go:130] > RuntimeName:  cri-o
	I0821 10:52:37.126201   97516 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0821 10:52:37.126224   97516 command_runner.go:130] > RuntimeApiVersion:  v1
	I0821 10:52:37.128053   97516 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0821 10:52:37.128136   97516 ssh_runner.go:195] Run: crio --version
	I0821 10:52:37.159520   97516 command_runner.go:130] > crio version 1.24.6
	I0821 10:52:37.159539   97516 command_runner.go:130] > Version:          1.24.6
	I0821 10:52:37.159545   97516 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0821 10:52:37.159550   97516 command_runner.go:130] > GitTreeState:     clean
	I0821 10:52:37.159555   97516 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0821 10:52:37.159559   97516 command_runner.go:130] > GoVersion:        go1.18.2
	I0821 10:52:37.159563   97516 command_runner.go:130] > Compiler:         gc
	I0821 10:52:37.159567   97516 command_runner.go:130] > Platform:         linux/amd64
	I0821 10:52:37.159572   97516 command_runner.go:130] > Linkmode:         dynamic
	I0821 10:52:37.159579   97516 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0821 10:52:37.159583   97516 command_runner.go:130] > SeccompEnabled:   true
	I0821 10:52:37.159591   97516 command_runner.go:130] > AppArmorEnabled:  false
	I0821 10:52:37.159645   97516 ssh_runner.go:195] Run: crio --version
	I0821 10:52:37.192681   97516 command_runner.go:130] > crio version 1.24.6
	I0821 10:52:37.192700   97516 command_runner.go:130] > Version:          1.24.6
	I0821 10:52:37.192710   97516 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0821 10:52:37.192716   97516 command_runner.go:130] > GitTreeState:     clean
	I0821 10:52:37.192724   97516 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0821 10:52:37.192729   97516 command_runner.go:130] > GoVersion:        go1.18.2
	I0821 10:52:37.192753   97516 command_runner.go:130] > Compiler:         gc
	I0821 10:52:37.192766   97516 command_runner.go:130] > Platform:         linux/amd64
	I0821 10:52:37.192776   97516 command_runner.go:130] > Linkmode:         dynamic
	I0821 10:52:37.192793   97516 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0821 10:52:37.192804   97516 command_runner.go:130] > SeccompEnabled:   true
	I0821 10:52:37.192814   97516 command_runner.go:130] > AppArmorEnabled:  false
	I0821 10:52:37.195575   97516 out.go:177] * Preparing Kubernetes v1.27.4 on CRI-O 1.24.6 ...
	I0821 10:52:37.197114   97516 cli_runner.go:164] Run: docker network inspect multinode-200985 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0821 10:52:37.212645   97516 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0821 10:52:37.215975   97516 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0821 10:52:37.225308   97516 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0821 10:52:37.225359   97516 ssh_runner.go:195] Run: sudo crictl images --output json
	I0821 10:52:37.269717   97516 command_runner.go:130] > {
	I0821 10:52:37.269735   97516 command_runner.go:130] >   "images": [
	I0821 10:52:37.269739   97516 command_runner.go:130] >     {
	I0821 10:52:37.269746   97516 command_runner.go:130] >       "id": "b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da",
	I0821 10:52:37.269751   97516 command_runner.go:130] >       "repoTags": [
	I0821 10:52:37.269757   97516 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0821 10:52:37.269760   97516 command_runner.go:130] >       ],
	I0821 10:52:37.269764   97516 command_runner.go:130] >       "repoDigests": [
	I0821 10:52:37.269772   97516 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974",
	I0821 10:52:37.269781   97516 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"
	I0821 10:52:37.269786   97516 command_runner.go:130] >       ],
	I0821 10:52:37.269791   97516 command_runner.go:130] >       "size": "65249302",
	I0821 10:52:37.269798   97516 command_runner.go:130] >       "uid": null,
	I0821 10:52:37.269801   97516 command_runner.go:130] >       "username": "",
	I0821 10:52:37.269810   97516 command_runner.go:130] >       "spec": null,
	I0821 10:52:37.269816   97516 command_runner.go:130] >       "pinned": false
	I0821 10:52:37.269820   97516 command_runner.go:130] >     },
	I0821 10:52:37.269823   97516 command_runner.go:130] >     {
	I0821 10:52:37.269829   97516 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0821 10:52:37.269834   97516 command_runner.go:130] >       "repoTags": [
	I0821 10:52:37.269839   97516 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0821 10:52:37.269845   97516 command_runner.go:130] >       ],
	I0821 10:52:37.269849   97516 command_runner.go:130] >       "repoDigests": [
	I0821 10:52:37.269858   97516 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0821 10:52:37.269867   97516 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0821 10:52:37.269871   97516 command_runner.go:130] >       ],
	I0821 10:52:37.269881   97516 command_runner.go:130] >       "size": "31470524",
	I0821 10:52:37.269889   97516 command_runner.go:130] >       "uid": null,
	I0821 10:52:37.269895   97516 command_runner.go:130] >       "username": "",
	I0821 10:52:37.269899   97516 command_runner.go:130] >       "spec": null,
	I0821 10:52:37.269903   97516 command_runner.go:130] >       "pinned": false
	I0821 10:52:37.269907   97516 command_runner.go:130] >     },
	I0821 10:52:37.269910   97516 command_runner.go:130] >     {
	I0821 10:52:37.269916   97516 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0821 10:52:37.269922   97516 command_runner.go:130] >       "repoTags": [
	I0821 10:52:37.269927   97516 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0821 10:52:37.269931   97516 command_runner.go:130] >       ],
	I0821 10:52:37.269936   97516 command_runner.go:130] >       "repoDigests": [
	I0821 10:52:37.269945   97516 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0821 10:52:37.269952   97516 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0821 10:52:37.269960   97516 command_runner.go:130] >       ],
	I0821 10:52:37.269964   97516 command_runner.go:130] >       "size": "53621675",
	I0821 10:52:37.269968   97516 command_runner.go:130] >       "uid": null,
	I0821 10:52:37.269974   97516 command_runner.go:130] >       "username": "",
	I0821 10:52:37.269978   97516 command_runner.go:130] >       "spec": null,
	I0821 10:52:37.269984   97516 command_runner.go:130] >       "pinned": false
	I0821 10:52:37.269989   97516 command_runner.go:130] >     },
	I0821 10:52:37.269993   97516 command_runner.go:130] >     {
	I0821 10:52:37.269999   97516 command_runner.go:130] >       "id": "86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681",
	I0821 10:52:37.270006   97516 command_runner.go:130] >       "repoTags": [
	I0821 10:52:37.270011   97516 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.7-0"
	I0821 10:52:37.270017   97516 command_runner.go:130] >       ],
	I0821 10:52:37.270020   97516 command_runner.go:130] >       "repoDigests": [
	I0821 10:52:37.270027   97516 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83",
	I0821 10:52:37.270038   97516 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9"
	I0821 10:52:37.270053   97516 command_runner.go:130] >       ],
	I0821 10:52:37.270060   97516 command_runner.go:130] >       "size": "297083935",
	I0821 10:52:37.270063   97516 command_runner.go:130] >       "uid": {
	I0821 10:52:37.270067   97516 command_runner.go:130] >         "value": "0"
	I0821 10:52:37.270071   97516 command_runner.go:130] >       },
	I0821 10:52:37.270077   97516 command_runner.go:130] >       "username": "",
	I0821 10:52:37.270081   97516 command_runner.go:130] >       "spec": null,
	I0821 10:52:37.270087   97516 command_runner.go:130] >       "pinned": false
	I0821 10:52:37.270092   97516 command_runner.go:130] >     },
	I0821 10:52:37.270098   97516 command_runner.go:130] >     {
	I0821 10:52:37.270104   97516 command_runner.go:130] >       "id": "e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c",
	I0821 10:52:37.270110   97516 command_runner.go:130] >       "repoTags": [
	I0821 10:52:37.270114   97516 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.27.4"
	I0821 10:52:37.270120   97516 command_runner.go:130] >       ],
	I0821 10:52:37.270124   97516 command_runner.go:130] >       "repoDigests": [
	I0821 10:52:37.270131   97516 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d",
	I0821 10:52:37.270140   97516 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:dcf39b4579f896291ec79bb2ef94ad2b51e2ad1846086df705b06dc3ae20c854"
	I0821 10:52:37.270143   97516 command_runner.go:130] >       ],
	I0821 10:52:37.270147   97516 command_runner.go:130] >       "size": "122078160",
	I0821 10:52:37.270151   97516 command_runner.go:130] >       "uid": {
	I0821 10:52:37.270155   97516 command_runner.go:130] >         "value": "0"
	I0821 10:52:37.270159   97516 command_runner.go:130] >       },
	I0821 10:52:37.270163   97516 command_runner.go:130] >       "username": "",
	I0821 10:52:37.270166   97516 command_runner.go:130] >       "spec": null,
	I0821 10:52:37.270170   97516 command_runner.go:130] >       "pinned": false
	I0821 10:52:37.270173   97516 command_runner.go:130] >     },
	I0821 10:52:37.270176   97516 command_runner.go:130] >     {
	I0821 10:52:37.270182   97516 command_runner.go:130] >       "id": "f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5",
	I0821 10:52:37.270189   97516 command_runner.go:130] >       "repoTags": [
	I0821 10:52:37.270194   97516 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.27.4"
	I0821 10:52:37.270199   97516 command_runner.go:130] >       ],
	I0821 10:52:37.270203   97516 command_runner.go:130] >       "repoDigests": [
	I0821 10:52:37.270212   97516 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265",
	I0821 10:52:37.270219   97516 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c4765f94930681526ac9179fc4e49b5254abcbfa33841af4602a52bc664f6934"
	I0821 10:52:37.270223   97516 command_runner.go:130] >       ],
	I0821 10:52:37.270227   97516 command_runner.go:130] >       "size": "113931062",
	I0821 10:52:37.270232   97516 command_runner.go:130] >       "uid": {
	I0821 10:52:37.270236   97516 command_runner.go:130] >         "value": "0"
	I0821 10:52:37.270242   97516 command_runner.go:130] >       },
	I0821 10:52:37.270246   97516 command_runner.go:130] >       "username": "",
	I0821 10:52:37.270250   97516 command_runner.go:130] >       "spec": null,
	I0821 10:52:37.270254   97516 command_runner.go:130] >       "pinned": false
	I0821 10:52:37.270257   97516 command_runner.go:130] >     },
	I0821 10:52:37.270260   97516 command_runner.go:130] >     {
	I0821 10:52:37.270267   97516 command_runner.go:130] >       "id": "6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4",
	I0821 10:52:37.270273   97516 command_runner.go:130] >       "repoTags": [
	I0821 10:52:37.270277   97516 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.27.4"
	I0821 10:52:37.270283   97516 command_runner.go:130] >       ],
	I0821 10:52:37.270287   97516 command_runner.go:130] >       "repoDigests": [
	I0821 10:52:37.270294   97516 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf",
	I0821 10:52:37.270302   97516 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ce9abe867450f8962eb851670b5869219ca0c3376777d1e18d89f9abedbe10c3"
	I0821 10:52:37.270306   97516 command_runner.go:130] >       ],
	I0821 10:52:37.270310   97516 command_runner.go:130] >       "size": "72714135",
	I0821 10:52:37.270317   97516 command_runner.go:130] >       "uid": null,
	I0821 10:52:37.270321   97516 command_runner.go:130] >       "username": "",
	I0821 10:52:37.270325   97516 command_runner.go:130] >       "spec": null,
	I0821 10:52:37.270330   97516 command_runner.go:130] >       "pinned": false
	I0821 10:52:37.270333   97516 command_runner.go:130] >     },
	I0821 10:52:37.270339   97516 command_runner.go:130] >     {
	I0821 10:52:37.270345   97516 command_runner.go:130] >       "id": "98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16",
	I0821 10:52:37.270351   97516 command_runner.go:130] >       "repoTags": [
	I0821 10:52:37.270356   97516 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.27.4"
	I0821 10:52:37.270361   97516 command_runner.go:130] >       ],
	I0821 10:52:37.270365   97516 command_runner.go:130] >       "repoDigests": [
	I0821 10:52:37.270396   97516 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af",
	I0821 10:52:37.270411   97516 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:9c58009453cfcd7533721327269d2ef0af93d09f21812a5d584c375840117da7"
	I0821 10:52:37.270417   97516 command_runner.go:130] >       ],
	I0821 10:52:37.270424   97516 command_runner.go:130] >       "size": "59814710",
	I0821 10:52:37.270430   97516 command_runner.go:130] >       "uid": {
	I0821 10:52:37.270441   97516 command_runner.go:130] >         "value": "0"
	I0821 10:52:37.270455   97516 command_runner.go:130] >       },
	I0821 10:52:37.270460   97516 command_runner.go:130] >       "username": "",
	I0821 10:52:37.270466   97516 command_runner.go:130] >       "spec": null,
	I0821 10:52:37.270470   97516 command_runner.go:130] >       "pinned": false
	I0821 10:52:37.270475   97516 command_runner.go:130] >     },
	I0821 10:52:37.270479   97516 command_runner.go:130] >     {
	I0821 10:52:37.270488   97516 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0821 10:52:37.270492   97516 command_runner.go:130] >       "repoTags": [
	I0821 10:52:37.270499   97516 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0821 10:52:37.270503   97516 command_runner.go:130] >       ],
	I0821 10:52:37.270515   97516 command_runner.go:130] >       "repoDigests": [
	I0821 10:52:37.270532   97516 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0821 10:52:37.270548   97516 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0821 10:52:37.270557   97516 command_runner.go:130] >       ],
	I0821 10:52:37.270563   97516 command_runner.go:130] >       "size": "750414",
	I0821 10:52:37.270571   97516 command_runner.go:130] >       "uid": {
	I0821 10:52:37.270575   97516 command_runner.go:130] >         "value": "65535"
	I0821 10:52:37.270579   97516 command_runner.go:130] >       },
	I0821 10:52:37.270583   97516 command_runner.go:130] >       "username": "",
	I0821 10:52:37.270590   97516 command_runner.go:130] >       "spec": null,
	I0821 10:52:37.270594   97516 command_runner.go:130] >       "pinned": false
	I0821 10:52:37.270597   97516 command_runner.go:130] >     }
	I0821 10:52:37.270600   97516 command_runner.go:130] >   ]
	I0821 10:52:37.270605   97516 command_runner.go:130] > }
	I0821 10:52:37.272042   97516 crio.go:496] all images are preloaded for cri-o runtime.
	I0821 10:52:37.272061   97516 crio.go:415] Images already preloaded, skipping extraction
	I0821 10:52:37.272102   97516 ssh_runner.go:195] Run: sudo crictl images --output json
	I0821 10:52:37.302460   97516 command_runner.go:130] > {
	I0821 10:52:37.302484   97516 command_runner.go:130] >   "images": [
	I0821 10:52:37.302490   97516 command_runner.go:130] >     {
	I0821 10:52:37.302504   97516 command_runner.go:130] >       "id": "b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da",
	I0821 10:52:37.302511   97516 command_runner.go:130] >       "repoTags": [
	I0821 10:52:37.302520   97516 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0821 10:52:37.302529   97516 command_runner.go:130] >       ],
	I0821 10:52:37.302538   97516 command_runner.go:130] >       "repoDigests": [
	I0821 10:52:37.302546   97516 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974",
	I0821 10:52:37.302557   97516 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"
	I0821 10:52:37.302561   97516 command_runner.go:130] >       ],
	I0821 10:52:37.302566   97516 command_runner.go:130] >       "size": "65249302",
	I0821 10:52:37.302570   97516 command_runner.go:130] >       "uid": null,
	I0821 10:52:37.302574   97516 command_runner.go:130] >       "username": "",
	I0821 10:52:37.302581   97516 command_runner.go:130] >       "spec": null,
	I0821 10:52:37.302585   97516 command_runner.go:130] >       "pinned": false
	I0821 10:52:37.302589   97516 command_runner.go:130] >     },
	I0821 10:52:37.302592   97516 command_runner.go:130] >     {
	I0821 10:52:37.302599   97516 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0821 10:52:37.302605   97516 command_runner.go:130] >       "repoTags": [
	I0821 10:52:37.302610   97516 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0821 10:52:37.302616   97516 command_runner.go:130] >       ],
	I0821 10:52:37.302621   97516 command_runner.go:130] >       "repoDigests": [
	I0821 10:52:37.302628   97516 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0821 10:52:37.302634   97516 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0821 10:52:37.302638   97516 command_runner.go:130] >       ],
	I0821 10:52:37.302643   97516 command_runner.go:130] >       "size": "31470524",
	I0821 10:52:37.302647   97516 command_runner.go:130] >       "uid": null,
	I0821 10:52:37.302651   97516 command_runner.go:130] >       "username": "",
	I0821 10:52:37.302654   97516 command_runner.go:130] >       "spec": null,
	I0821 10:52:37.302658   97516 command_runner.go:130] >       "pinned": false
	I0821 10:52:37.302661   97516 command_runner.go:130] >     },
	I0821 10:52:37.302664   97516 command_runner.go:130] >     {
	I0821 10:52:37.302670   97516 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0821 10:52:37.302674   97516 command_runner.go:130] >       "repoTags": [
	I0821 10:52:37.302678   97516 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0821 10:52:37.302681   97516 command_runner.go:130] >       ],
	I0821 10:52:37.302685   97516 command_runner.go:130] >       "repoDigests": [
	I0821 10:52:37.302692   97516 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0821 10:52:37.302699   97516 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0821 10:52:37.302705   97516 command_runner.go:130] >       ],
	I0821 10:52:37.302709   97516 command_runner.go:130] >       "size": "53621675",
	I0821 10:52:37.302716   97516 command_runner.go:130] >       "uid": null,
	I0821 10:52:37.302720   97516 command_runner.go:130] >       "username": "",
	I0821 10:52:37.302724   97516 command_runner.go:130] >       "spec": null,
	I0821 10:52:37.302728   97516 command_runner.go:130] >       "pinned": false
	I0821 10:52:37.302732   97516 command_runner.go:130] >     },
	I0821 10:52:37.302738   97516 command_runner.go:130] >     {
	I0821 10:52:37.302744   97516 command_runner.go:130] >       "id": "86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681",
	I0821 10:52:37.302750   97516 command_runner.go:130] >       "repoTags": [
	I0821 10:52:37.302756   97516 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.7-0"
	I0821 10:52:37.302762   97516 command_runner.go:130] >       ],
	I0821 10:52:37.302766   97516 command_runner.go:130] >       "repoDigests": [
	I0821 10:52:37.302775   97516 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83",
	I0821 10:52:37.302783   97516 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9"
	I0821 10:52:37.302792   97516 command_runner.go:130] >       ],
	I0821 10:52:37.302800   97516 command_runner.go:130] >       "size": "297083935",
	I0821 10:52:37.302808   97516 command_runner.go:130] >       "uid": {
	I0821 10:52:37.302812   97516 command_runner.go:130] >         "value": "0"
	I0821 10:52:37.302818   97516 command_runner.go:130] >       },
	I0821 10:52:37.302822   97516 command_runner.go:130] >       "username": "",
	I0821 10:52:37.302829   97516 command_runner.go:130] >       "spec": null,
	I0821 10:52:37.302833   97516 command_runner.go:130] >       "pinned": false
	I0821 10:52:37.302838   97516 command_runner.go:130] >     },
	I0821 10:52:37.302842   97516 command_runner.go:130] >     {
	I0821 10:52:37.302850   97516 command_runner.go:130] >       "id": "e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c",
	I0821 10:52:37.302857   97516 command_runner.go:130] >       "repoTags": [
	I0821 10:52:37.302862   97516 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.27.4"
	I0821 10:52:37.302868   97516 command_runner.go:130] >       ],
	I0821 10:52:37.302872   97516 command_runner.go:130] >       "repoDigests": [
	I0821 10:52:37.302882   97516 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d",
	I0821 10:52:37.302891   97516 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:dcf39b4579f896291ec79bb2ef94ad2b51e2ad1846086df705b06dc3ae20c854"
	I0821 10:52:37.302897   97516 command_runner.go:130] >       ],
	I0821 10:52:37.302901   97516 command_runner.go:130] >       "size": "122078160",
	I0821 10:52:37.302904   97516 command_runner.go:130] >       "uid": {
	I0821 10:52:37.302911   97516 command_runner.go:130] >         "value": "0"
	I0821 10:52:37.302915   97516 command_runner.go:130] >       },
	I0821 10:52:37.302921   97516 command_runner.go:130] >       "username": "",
	I0821 10:52:37.302926   97516 command_runner.go:130] >       "spec": null,
	I0821 10:52:37.302932   97516 command_runner.go:130] >       "pinned": false
	I0821 10:52:37.302935   97516 command_runner.go:130] >     },
	I0821 10:52:37.302941   97516 command_runner.go:130] >     {
	I0821 10:52:37.302947   97516 command_runner.go:130] >       "id": "f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5",
	I0821 10:52:37.302953   97516 command_runner.go:130] >       "repoTags": [
	I0821 10:52:37.302959   97516 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.27.4"
	I0821 10:52:37.302965   97516 command_runner.go:130] >       ],
	I0821 10:52:37.302969   97516 command_runner.go:130] >       "repoDigests": [
	I0821 10:52:37.302979   97516 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265",
	I0821 10:52:37.302988   97516 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c4765f94930681526ac9179fc4e49b5254abcbfa33841af4602a52bc664f6934"
	I0821 10:52:37.302994   97516 command_runner.go:130] >       ],
	I0821 10:52:37.302998   97516 command_runner.go:130] >       "size": "113931062",
	I0821 10:52:37.303002   97516 command_runner.go:130] >       "uid": {
	I0821 10:52:37.303007   97516 command_runner.go:130] >         "value": "0"
	I0821 10:52:37.303010   97516 command_runner.go:130] >       },
	I0821 10:52:37.303017   97516 command_runner.go:130] >       "username": "",
	I0821 10:52:37.303020   97516 command_runner.go:130] >       "spec": null,
	I0821 10:52:37.303024   97516 command_runner.go:130] >       "pinned": false
	I0821 10:52:37.303030   97516 command_runner.go:130] >     },
	I0821 10:52:37.303034   97516 command_runner.go:130] >     {
	I0821 10:52:37.303042   97516 command_runner.go:130] >       "id": "6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4",
	I0821 10:52:37.303047   97516 command_runner.go:130] >       "repoTags": [
	I0821 10:52:37.303051   97516 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.27.4"
	I0821 10:52:37.303057   97516 command_runner.go:130] >       ],
	I0821 10:52:37.303061   97516 command_runner.go:130] >       "repoDigests": [
	I0821 10:52:37.303070   97516 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf",
	I0821 10:52:37.303079   97516 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:ce9abe867450f8962eb851670b5869219ca0c3376777d1e18d89f9abedbe10c3"
	I0821 10:52:37.303084   97516 command_runner.go:130] >       ],
	I0821 10:52:37.303088   97516 command_runner.go:130] >       "size": "72714135",
	I0821 10:52:37.303095   97516 command_runner.go:130] >       "uid": null,
	I0821 10:52:37.303099   97516 command_runner.go:130] >       "username": "",
	I0821 10:52:37.303105   97516 command_runner.go:130] >       "spec": null,
	I0821 10:52:37.303109   97516 command_runner.go:130] >       "pinned": false
	I0821 10:52:37.303115   97516 command_runner.go:130] >     },
	I0821 10:52:37.303118   97516 command_runner.go:130] >     {
	I0821 10:52:37.303127   97516 command_runner.go:130] >       "id": "98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16",
	I0821 10:52:37.303133   97516 command_runner.go:130] >       "repoTags": [
	I0821 10:52:37.303139   97516 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.27.4"
	I0821 10:52:37.303144   97516 command_runner.go:130] >       ],
	I0821 10:52:37.303149   97516 command_runner.go:130] >       "repoDigests": [
	I0821 10:52:37.303200   97516 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af",
	I0821 10:52:37.303211   97516 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:9c58009453cfcd7533721327269d2ef0af93d09f21812a5d584c375840117da7"
	I0821 10:52:37.303214   97516 command_runner.go:130] >       ],
	I0821 10:52:37.303221   97516 command_runner.go:130] >       "size": "59814710",
	I0821 10:52:37.303227   97516 command_runner.go:130] >       "uid": {
	I0821 10:52:37.303236   97516 command_runner.go:130] >         "value": "0"
	I0821 10:52:37.303244   97516 command_runner.go:130] >       },
	I0821 10:52:37.303257   97516 command_runner.go:130] >       "username": "",
	I0821 10:52:37.303267   97516 command_runner.go:130] >       "spec": null,
	I0821 10:52:37.303277   97516 command_runner.go:130] >       "pinned": false
	I0821 10:52:37.303284   97516 command_runner.go:130] >     },
	I0821 10:52:37.303288   97516 command_runner.go:130] >     {
	I0821 10:52:37.303295   97516 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0821 10:52:37.303305   97516 command_runner.go:130] >       "repoTags": [
	I0821 10:52:37.303311   97516 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0821 10:52:37.303315   97516 command_runner.go:130] >       ],
	I0821 10:52:37.303319   97516 command_runner.go:130] >       "repoDigests": [
	I0821 10:52:37.303328   97516 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0821 10:52:37.303337   97516 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0821 10:52:37.303343   97516 command_runner.go:130] >       ],
	I0821 10:52:37.303348   97516 command_runner.go:130] >       "size": "750414",
	I0821 10:52:37.303369   97516 command_runner.go:130] >       "uid": {
	I0821 10:52:37.303381   97516 command_runner.go:130] >         "value": "65535"
	I0821 10:52:37.303390   97516 command_runner.go:130] >       },
	I0821 10:52:37.303395   97516 command_runner.go:130] >       "username": "",
	I0821 10:52:37.303401   97516 command_runner.go:130] >       "spec": null,
	I0821 10:52:37.303405   97516 command_runner.go:130] >       "pinned": false
	I0821 10:52:37.303411   97516 command_runner.go:130] >     }
	I0821 10:52:37.303415   97516 command_runner.go:130] >   ]
	I0821 10:52:37.303420   97516 command_runner.go:130] > }
	I0821 10:52:37.304489   97516 crio.go:496] all images are preloaded for cri-o runtime.
	I0821 10:52:37.304510   97516 cache_images.go:84] Images are preloaded, skipping loading
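
The preload check itself is straightforward: decode the JSON that "sudo crictl images --output json" printed above and confirm the expected repo tags are present. A sketch against the structure shown in the log; the wanted list here is a two-entry stand-in, not minikube's full set:

    // Verify preloaded images by parsing crictl's JSON listing.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    type imageList struct {
    	Images []struct {
    		ID          string   `json:"id"`
    		RepoTags    []string `json:"repoTags"`
    		RepoDigests []string `json:"repoDigests"`
    		Size        string   `json:"size"`
    	} `json:"images"`
    }

    func main() {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		panic(err)
    	}
    	var list imageList
    	if err := json.Unmarshal(out, &list); err != nil {
    		panic(err)
    	}
    	have := map[string]bool{}
    	for _, img := range list.Images {
    		for _, tag := range img.RepoTags {
    			have[tag] = true
    		}
    	}
    	for _, want := range []string{"registry.k8s.io/kube-apiserver:v1.27.4", "registry.k8s.io/pause:3.9"} {
    		fmt.Println(want, "preloaded:", have[want])
    	}
    }
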
	I0821 10:52:37.304572   97516 ssh_runner.go:195] Run: crio config
	I0821 10:52:37.341011   97516 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0821 10:52:37.341038   97516 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0821 10:52:37.341050   97516 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0821 10:52:37.341055   97516 command_runner.go:130] > #
	I0821 10:52:37.341068   97516 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0821 10:52:37.341077   97516 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0821 10:52:37.341083   97516 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0821 10:52:37.341104   97516 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0821 10:52:37.341119   97516 command_runner.go:130] > # reload'.
	I0821 10:52:37.341130   97516 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0821 10:52:37.341144   97516 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0821 10:52:37.341157   97516 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0821 10:52:37.341170   97516 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0821 10:52:37.341179   97516 command_runner.go:130] > [crio]
	I0821 10:52:37.341188   97516 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0821 10:52:37.341198   97516 command_runner.go:130] > # container images, in this directory.
	I0821 10:52:37.341212   97516 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0821 10:52:37.341221   97516 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0821 10:52:37.341230   97516 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0821 10:52:37.341244   97516 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0821 10:52:37.341261   97516 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0821 10:52:37.341272   97516 command_runner.go:130] > # storage_driver = "vfs"
	I0821 10:52:37.341281   97516 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0821 10:52:37.341293   97516 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0821 10:52:37.341302   97516 command_runner.go:130] > # storage_option = [
	I0821 10:52:37.341307   97516 command_runner.go:130] > # ]
	I0821 10:52:37.341321   97516 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0821 10:52:37.341333   97516 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0821 10:52:37.341344   97516 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0821 10:52:37.341357   97516 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0821 10:52:37.341370   97516 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0821 10:52:37.341381   97516 command_runner.go:130] > # always happen on a node reboot
	I0821 10:52:37.341392   97516 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0821 10:52:37.341406   97516 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0821 10:52:37.341423   97516 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0821 10:52:37.341440   97516 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0821 10:52:37.341453   97516 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0821 10:52:37.341469   97516 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0821 10:52:37.341486   97516 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0821 10:52:37.341497   97516 command_runner.go:130] > # internal_wipe = true
	I0821 10:52:37.341507   97516 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0821 10:52:37.341528   97516 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0821 10:52:37.341542   97516 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0821 10:52:37.341552   97516 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0821 10:52:37.341573   97516 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0821 10:52:37.341580   97516 command_runner.go:130] > [crio.api]
	I0821 10:52:37.341588   97516 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0821 10:52:37.341600   97516 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0821 10:52:37.341610   97516 command_runner.go:130] > # IP address on which the stream server will listen.
	I0821 10:52:37.341621   97516 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0821 10:52:37.341637   97516 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0821 10:52:37.341649   97516 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0821 10:52:37.341660   97516 command_runner.go:130] > # stream_port = "0"
	I0821 10:52:37.341669   97516 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0821 10:52:37.341720   97516 command_runner.go:130] > # stream_enable_tls = false
	I0821 10:52:37.341737   97516 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0821 10:52:37.341745   97516 command_runner.go:130] > # stream_idle_timeout = ""
	I0821 10:52:37.341760   97516 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0821 10:52:37.341774   97516 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0821 10:52:37.341782   97516 command_runner.go:130] > # minutes.
	I0821 10:52:37.341788   97516 command_runner.go:130] > # stream_tls_cert = ""
	I0821 10:52:37.341801   97516 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0821 10:52:37.341814   97516 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0821 10:52:37.341827   97516 command_runner.go:130] > # stream_tls_key = ""
	I0821 10:52:37.341838   97516 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0821 10:52:37.341852   97516 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0821 10:52:37.341866   97516 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0821 10:52:37.341875   97516 command_runner.go:130] > # stream_tls_ca = ""
	I0821 10:52:37.341888   97516 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0821 10:52:37.341900   97516 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0821 10:52:37.341915   97516 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0821 10:52:37.341928   97516 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0821 10:52:37.341947   97516 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0821 10:52:37.341960   97516 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0821 10:52:37.341971   97516 command_runner.go:130] > [crio.runtime]
	I0821 10:52:37.341982   97516 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0821 10:52:37.341995   97516 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0821 10:52:37.342005   97516 command_runner.go:130] > # "nofile=1024:2048"
	I0821 10:52:37.342017   97516 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0821 10:52:37.342028   97516 command_runner.go:130] > # default_ulimits = [
	I0821 10:52:37.342036   97516 command_runner.go:130] > # ]
	I0821 10:52:37.342047   97516 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0821 10:52:37.342062   97516 command_runner.go:130] > # no_pivot = false
	I0821 10:52:37.342073   97516 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0821 10:52:37.342087   97516 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0821 10:52:37.342099   97516 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0821 10:52:37.342113   97516 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0821 10:52:37.342123   97516 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0821 10:52:37.342135   97516 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0821 10:52:37.342149   97516 command_runner.go:130] > # conmon = ""
	I0821 10:52:37.342157   97516 command_runner.go:130] > # Cgroup setting for conmon
	I0821 10:52:37.342168   97516 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0821 10:52:37.342175   97516 command_runner.go:130] > conmon_cgroup = "pod"
	I0821 10:52:37.342186   97516 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0821 10:52:37.342199   97516 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0821 10:52:37.342212   97516 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0821 10:52:37.342223   97516 command_runner.go:130] > # conmon_env = [
	I0821 10:52:37.342228   97516 command_runner.go:130] > # ]
	I0821 10:52:37.342244   97516 command_runner.go:130] > # Additional environment variables to set for all the
	I0821 10:52:37.342253   97516 command_runner.go:130] > # containers. These are overridden if set in the
	I0821 10:52:37.342264   97516 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0821 10:52:37.342274   97516 command_runner.go:130] > # default_env = [
	I0821 10:52:37.342279   97516 command_runner.go:130] > # ]
	I0821 10:52:37.342294   97516 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0821 10:52:37.342303   97516 command_runner.go:130] > # selinux = false
	I0821 10:52:37.342313   97516 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0821 10:52:37.342329   97516 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0821 10:52:37.342344   97516 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0821 10:52:37.342349   97516 command_runner.go:130] > # seccomp_profile = ""
	I0821 10:52:37.342357   97516 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0821 10:52:37.342365   97516 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0821 10:52:37.342376   97516 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0821 10:52:37.342385   97516 command_runner.go:130] > # which might increase security.
	I0821 10:52:37.342391   97516 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0821 10:52:37.342399   97516 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0821 10:52:37.342409   97516 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0821 10:52:37.342419   97516 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0821 10:52:37.342430   97516 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0821 10:52:37.342440   97516 command_runner.go:130] > # This option supports live configuration reload.
	I0821 10:52:37.342449   97516 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0821 10:52:37.342463   97516 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0821 10:52:37.342474   97516 command_runner.go:130] > # the cgroup blockio controller.
	I0821 10:52:37.342483   97516 command_runner.go:130] > # blockio_config_file = ""
	I0821 10:52:37.342497   97516 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0821 10:52:37.342506   97516 command_runner.go:130] > # irqbalance daemon.
	I0821 10:52:37.342511   97516 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0821 10:52:37.342524   97516 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0821 10:52:37.342532   97516 command_runner.go:130] > # This option supports live configuration reload.
	I0821 10:52:37.342538   97516 command_runner.go:130] > # rdt_config_file = ""
	I0821 10:52:37.342543   97516 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0821 10:52:37.342550   97516 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0821 10:52:37.342556   97516 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0821 10:52:37.342562   97516 command_runner.go:130] > # separate_pull_cgroup = ""
	I0821 10:52:37.342593   97516 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0821 10:52:37.342603   97516 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0821 10:52:37.342607   97516 command_runner.go:130] > # will be added.
	I0821 10:52:37.342614   97516 command_runner.go:130] > # default_capabilities = [
	I0821 10:52:37.342622   97516 command_runner.go:130] > # 	"CHOWN",
	I0821 10:52:37.342629   97516 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0821 10:52:37.342638   97516 command_runner.go:130] > # 	"FSETID",
	I0821 10:52:37.342645   97516 command_runner.go:130] > # 	"FOWNER",
	I0821 10:52:37.342654   97516 command_runner.go:130] > # 	"SETGID",
	I0821 10:52:37.342662   97516 command_runner.go:130] > # 	"SETUID",
	I0821 10:52:37.342671   97516 command_runner.go:130] > # 	"SETPCAP",
	I0821 10:52:37.342678   97516 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0821 10:52:37.342687   97516 command_runner.go:130] > # 	"KILL",
	I0821 10:52:37.342693   97516 command_runner.go:130] > # ]
	I0821 10:52:37.342705   97516 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0821 10:52:37.342719   97516 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0821 10:52:37.342731   97516 command_runner.go:130] > # add_inheritable_capabilities = true
	I0821 10:52:37.342742   97516 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0821 10:52:37.342756   97516 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0821 10:52:37.342766   97516 command_runner.go:130] > # default_sysctls = [
	I0821 10:52:37.342775   97516 command_runner.go:130] > # ]
	I0821 10:52:37.342786   97516 command_runner.go:130] > # List of devices on the host that a
	I0821 10:52:37.342799   97516 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0821 10:52:37.342806   97516 command_runner.go:130] > # allowed_devices = [
	I0821 10:52:37.342811   97516 command_runner.go:130] > # 	"/dev/fuse",
	I0821 10:52:37.342820   97516 command_runner.go:130] > # ]
	I0821 10:52:37.342831   97516 command_runner.go:130] > # List of additional devices, specified as
	I0821 10:52:37.342865   97516 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0821 10:52:37.342878   97516 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0821 10:52:37.342892   97516 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0821 10:52:37.342902   97516 command_runner.go:130] > # additional_devices = [
	I0821 10:52:37.342909   97516 command_runner.go:130] > # ]
	I0821 10:52:37.342921   97516 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0821 10:52:37.342928   97516 command_runner.go:130] > # cdi_spec_dirs = [
	I0821 10:52:37.342938   97516 command_runner.go:130] > # 	"/etc/cdi",
	I0821 10:52:37.342947   97516 command_runner.go:130] > # 	"/var/run/cdi",
	I0821 10:52:37.342956   97516 command_runner.go:130] > # ]
	I0821 10:52:37.342972   97516 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0821 10:52:37.342985   97516 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0821 10:52:37.342993   97516 command_runner.go:130] > # Defaults to false.
	I0821 10:52:37.343002   97516 command_runner.go:130] > # device_ownership_from_security_context = false
	I0821 10:52:37.343015   97516 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0821 10:52:37.343029   97516 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0821 10:52:37.343039   97516 command_runner.go:130] > # hooks_dir = [
	I0821 10:52:37.343051   97516 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0821 10:52:37.343060   97516 command_runner.go:130] > # ]
	I0821 10:52:37.343074   97516 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0821 10:52:37.343088   97516 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0821 10:52:37.343096   97516 command_runner.go:130] > # its default mounts from the following two files:
	I0821 10:52:37.343103   97516 command_runner.go:130] > #
	I0821 10:52:37.343114   97516 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0821 10:52:37.343129   97516 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0821 10:52:37.343142   97516 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0821 10:52:37.343151   97516 command_runner.go:130] > #
	I0821 10:52:37.343164   97516 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0821 10:52:37.343178   97516 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0821 10:52:37.343191   97516 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0821 10:52:37.343199   97516 command_runner.go:130] > #      only add mounts it finds in this file.
	I0821 10:52:37.343207   97516 command_runner.go:130] > #
	I0821 10:52:37.343218   97516 command_runner.go:130] > # default_mounts_file = ""
	I0821 10:52:37.343230   97516 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0821 10:52:37.343246   97516 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0821 10:52:37.343255   97516 command_runner.go:130] > # pids_limit = 0
	I0821 10:52:37.343269   97516 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0821 10:52:37.343279   97516 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0821 10:52:37.343289   97516 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0821 10:52:37.343305   97516 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0821 10:52:37.343316   97516 command_runner.go:130] > # log_size_max = -1
	I0821 10:52:37.343327   97516 command_runner.go:130] > # Whether container output should be logged to journald in addition to the Kubernetes log file
	I0821 10:52:37.343341   97516 command_runner.go:130] > # log_to_journald = false
	I0821 10:52:37.343365   97516 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0821 10:52:37.343377   97516 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0821 10:52:37.343386   97516 command_runner.go:130] > # Path to directory for container attach sockets.
	I0821 10:52:37.343398   97516 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0821 10:52:37.343410   97516 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0821 10:52:37.343421   97516 command_runner.go:130] > # bind_mount_prefix = ""
	I0821 10:52:37.343433   97516 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0821 10:52:37.343443   97516 command_runner.go:130] > # read_only = false
	I0821 10:52:37.343452   97516 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0821 10:52:37.343465   97516 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0821 10:52:37.343479   97516 command_runner.go:130] > # live configuration reload.
	I0821 10:52:37.343491   97516 command_runner.go:130] > # log_level = "info"
	I0821 10:52:37.343504   97516 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0821 10:52:37.343520   97516 command_runner.go:130] > # This option supports live configuration reload.
	I0821 10:52:37.343529   97516 command_runner.go:130] > # log_filter = ""
	I0821 10:52:37.343541   97516 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0821 10:52:37.343550   97516 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0821 10:52:37.343559   97516 command_runner.go:130] > # separated by comma.
	I0821 10:52:37.343569   97516 command_runner.go:130] > # uid_mappings = ""
	I0821 10:52:37.343607   97516 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0821 10:52:37.343621   97516 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0821 10:52:37.343630   97516 command_runner.go:130] > # separated by comma.
	I0821 10:52:37.343635   97516 command_runner.go:130] > # gid_mappings = ""
	I0821 10:52:37.343644   97516 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0821 10:52:37.343658   97516 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0821 10:52:37.343672   97516 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0821 10:52:37.343682   97516 command_runner.go:130] > # minimum_mappable_uid = -1
	I0821 10:52:37.343697   97516 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0821 10:52:37.343710   97516 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0821 10:52:37.343720   97516 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0821 10:52:37.343727   97516 command_runner.go:130] > # minimum_mappable_gid = -1
	I0821 10:52:37.343741   97516 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0821 10:52:37.343756   97516 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0821 10:52:37.343769   97516 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0821 10:52:37.343779   97516 command_runner.go:130] > # ctr_stop_timeout = 30
	I0821 10:52:37.343792   97516 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0821 10:52:37.343804   97516 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0821 10:52:37.343821   97516 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0821 10:52:37.343829   97516 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0821 10:52:37.343840   97516 command_runner.go:130] > # drop_infra_ctr = true
	I0821 10:52:37.343854   97516 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0821 10:52:37.343868   97516 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0821 10:52:37.343883   97516 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0821 10:52:37.343893   97516 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0821 10:52:37.343906   97516 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0821 10:52:37.343915   97516 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0821 10:52:37.343920   97516 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0821 10:52:37.343935   97516 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0821 10:52:37.343946   97516 command_runner.go:130] > # pinns_path = ""
	I0821 10:52:37.343958   97516 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0821 10:52:37.343973   97516 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0821 10:52:37.343987   97516 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0821 10:52:37.343997   97516 command_runner.go:130] > # default_runtime = "runc"
	I0821 10:52:37.344009   97516 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0821 10:52:37.344019   97516 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating it as a directory).
	I0821 10:52:37.344037   97516 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0821 10:52:37.344049   97516 command_runner.go:130] > # creation as a file is not desired either.
	I0821 10:52:37.344067   97516 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0821 10:52:37.344078   97516 command_runner.go:130] > # the hostname is being managed dynamically.
	I0821 10:52:37.344089   97516 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0821 10:52:37.344098   97516 command_runner.go:130] > # ]
	I0821 10:52:37.344111   97516 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0821 10:52:37.344120   97516 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0821 10:52:37.344134   97516 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0821 10:52:37.344148   97516 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0821 10:52:37.344157   97516 command_runner.go:130] > #
	I0821 10:52:37.344168   97516 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0821 10:52:37.344179   97516 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0821 10:52:37.344189   97516 command_runner.go:130] > #  runtime_type = "oci"
	I0821 10:52:37.344200   97516 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0821 10:52:37.344210   97516 command_runner.go:130] > #  privileged_without_host_devices = false
	I0821 10:52:37.344218   97516 command_runner.go:130] > #  allowed_annotations = []
	I0821 10:52:37.344223   97516 command_runner.go:130] > # Where:
	I0821 10:52:37.344235   97516 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0821 10:52:37.344252   97516 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0821 10:52:37.344267   97516 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0821 10:52:37.344280   97516 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0821 10:52:37.344290   97516 command_runner.go:130] > #   in $PATH.
	I0821 10:52:37.344303   97516 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0821 10:52:37.344313   97516 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0821 10:52:37.344322   97516 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0821 10:52:37.344331   97516 command_runner.go:130] > #   state.
	I0821 10:52:37.344344   97516 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0821 10:52:37.344360   97516 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0821 10:52:37.344373   97516 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0821 10:52:37.344386   97516 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0821 10:52:37.344400   97516 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0821 10:52:37.344409   97516 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0821 10:52:37.344419   97516 command_runner.go:130] > #   The currently recognized values are:
	I0821 10:52:37.344454   97516 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0821 10:52:37.344470   97516 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0821 10:52:37.344483   97516 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0821 10:52:37.344493   97516 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0821 10:52:37.344507   97516 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0821 10:52:37.344527   97516 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0821 10:52:37.344540   97516 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0821 10:52:37.344555   97516 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0821 10:52:37.344566   97516 command_runner.go:130] > #   should be moved to the container's cgroup
	I0821 10:52:37.344576   97516 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0821 10:52:37.344586   97516 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0821 10:52:37.344593   97516 command_runner.go:130] > runtime_type = "oci"
	I0821 10:52:37.344598   97516 command_runner.go:130] > runtime_root = "/run/runc"
	I0821 10:52:37.344608   97516 command_runner.go:130] > runtime_config_path = ""
	I0821 10:52:37.344618   97516 command_runner.go:130] > monitor_path = ""
	I0821 10:52:37.344625   97516 command_runner.go:130] > monitor_cgroup = ""
	I0821 10:52:37.344636   97516 command_runner.go:130] > monitor_exec_cgroup = ""
	I0821 10:52:37.344701   97516 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0821 10:52:37.344715   97516 command_runner.go:130] > # running containers
	I0821 10:52:37.344724   97516 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0821 10:52:37.344738   97516 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0821 10:52:37.344756   97516 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0821 10:52:37.344769   97516 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0821 10:52:37.344780   97516 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0821 10:52:37.344788   97516 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0821 10:52:37.344797   97516 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0821 10:52:37.344808   97516 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0821 10:52:37.344820   97516 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0821 10:52:37.344830   97516 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0821 10:52:37.344844   97516 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0821 10:52:37.344858   97516 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0821 10:52:37.344869   97516 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0821 10:52:37.344883   97516 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0821 10:52:37.344900   97516 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0821 10:52:37.344913   97516 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0821 10:52:37.344933   97516 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0821 10:52:37.344949   97516 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0821 10:52:37.344959   97516 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0821 10:52:37.344970   97516 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0821 10:52:37.344980   97516 command_runner.go:130] > # Example:
	I0821 10:52:37.344991   97516 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0821 10:52:37.345003   97516 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0821 10:52:37.345014   97516 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0821 10:52:37.345026   97516 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0821 10:52:37.345035   97516 command_runner.go:130] > # cpuset = 0
	I0821 10:52:37.345045   97516 command_runner.go:130] > # cpushares = "0-1"
	I0821 10:52:37.345053   97516 command_runner.go:130] > # Where:
	I0821 10:52:37.345061   97516 command_runner.go:130] > # The workload name is workload-type.
	I0821 10:52:37.345076   97516 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0821 10:52:37.345089   97516 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0821 10:52:37.345102   97516 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0821 10:52:37.345120   97516 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0821 10:52:37.345132   97516 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0821 10:52:37.345141   97516 command_runner.go:130] > # 
	I0821 10:52:37.345153   97516 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0821 10:52:37.345159   97516 command_runner.go:130] > #
	I0821 10:52:37.345172   97516 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0821 10:52:37.345187   97516 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0821 10:52:37.345202   97516 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0821 10:52:37.345216   97516 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0821 10:52:37.345228   97516 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0821 10:52:37.345238   97516 command_runner.go:130] > [crio.image]
	I0821 10:52:37.345250   97516 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0821 10:52:37.345257   97516 command_runner.go:130] > # default_transport = "docker://"
	I0821 10:52:37.345268   97516 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0821 10:52:37.345282   97516 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0821 10:52:37.345294   97516 command_runner.go:130] > # global_auth_file = ""
	I0821 10:52:37.345305   97516 command_runner.go:130] > # The image used to instantiate infra containers.
	I0821 10:52:37.345317   97516 command_runner.go:130] > # This option supports live configuration reload.
	I0821 10:52:37.345329   97516 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0821 10:52:37.345380   97516 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0821 10:52:37.345396   97516 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0821 10:52:37.345407   97516 command_runner.go:130] > # This option supports live configuration reload.
	I0821 10:52:37.345418   97516 command_runner.go:130] > # pause_image_auth_file = ""
	I0821 10:52:37.345428   97516 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0821 10:52:37.345442   97516 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0821 10:52:37.345455   97516 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0821 10:52:37.345465   97516 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0821 10:52:37.345474   97516 command_runner.go:130] > # pause_command = "/pause"
	I0821 10:52:37.345488   97516 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0821 10:52:37.345502   97516 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0821 10:52:37.345522   97516 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0821 10:52:37.345536   97516 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0821 10:52:37.345548   97516 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0821 10:52:37.345559   97516 command_runner.go:130] > # signature_policy = ""
	I0821 10:52:37.345569   97516 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0821 10:52:37.345580   97516 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0821 10:52:37.345591   97516 command_runner.go:130] > # changing them here.
	I0821 10:52:37.345602   97516 command_runner.go:130] > # insecure_registries = [
	I0821 10:52:37.345610   97516 command_runner.go:130] > # ]
	I0821 10:52:37.345625   97516 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0821 10:52:37.345636   97516 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0821 10:52:37.345651   97516 command_runner.go:130] > # image_volumes = "mkdir"
	I0821 10:52:37.345659   97516 command_runner.go:130] > # Temporary directory to use for storing big files
	I0821 10:52:37.345668   97516 command_runner.go:130] > # big_files_temporary_dir = ""
	I0821 10:52:37.345682   97516 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0821 10:52:37.345693   97516 command_runner.go:130] > # CNI plugins.
	I0821 10:52:37.345702   97516 command_runner.go:130] > [crio.network]
	I0821 10:52:37.345716   97516 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0821 10:52:37.345725   97516 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0821 10:52:37.345735   97516 command_runner.go:130] > # cni_default_network = ""
	I0821 10:52:37.345745   97516 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0821 10:52:37.345759   97516 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0821 10:52:37.345773   97516 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0821 10:52:37.345783   97516 command_runner.go:130] > # plugin_dirs = [
	I0821 10:52:37.345793   97516 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0821 10:52:37.345801   97516 command_runner.go:130] > # ]
	I0821 10:52:37.345814   97516 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0821 10:52:37.345823   97516 command_runner.go:130] > [crio.metrics]
	I0821 10:52:37.345834   97516 command_runner.go:130] > # Globally enable or disable metrics support.
	I0821 10:52:37.345841   97516 command_runner.go:130] > # enable_metrics = false
	I0821 10:52:37.345847   97516 command_runner.go:130] > # Specify enabled metrics collectors.
	I0821 10:52:37.345858   97516 command_runner.go:130] > # Per default all metrics are enabled.
	I0821 10:52:37.345872   97516 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0821 10:52:37.345886   97516 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0821 10:52:37.345900   97516 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0821 10:52:37.345909   97516 command_runner.go:130] > # metrics_collectors = [
	I0821 10:52:37.345919   97516 command_runner.go:130] > # 	"operations",
	I0821 10:52:37.345930   97516 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0821 10:52:37.345938   97516 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0821 10:52:37.345945   97516 command_runner.go:130] > # 	"operations_errors",
	I0821 10:52:37.345955   97516 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0821 10:52:37.345967   97516 command_runner.go:130] > # 	"image_pulls_by_name",
	I0821 10:52:37.345975   97516 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0821 10:52:37.345985   97516 command_runner.go:130] > # 	"image_pulls_failures",
	I0821 10:52:37.345995   97516 command_runner.go:130] > # 	"image_pulls_successes",
	I0821 10:52:37.346005   97516 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0821 10:52:37.346016   97516 command_runner.go:130] > # 	"image_layer_reuse",
	I0821 10:52:37.346026   97516 command_runner.go:130] > # 	"containers_oom_total",
	I0821 10:52:37.346034   97516 command_runner.go:130] > # 	"containers_oom",
	I0821 10:52:37.346039   97516 command_runner.go:130] > # 	"processes_defunct",
	I0821 10:52:37.346044   97516 command_runner.go:130] > # 	"operations_total",
	I0821 10:52:37.346054   97516 command_runner.go:130] > # 	"operations_latency_seconds",
	I0821 10:52:37.346065   97516 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0821 10:52:37.346074   97516 command_runner.go:130] > # 	"operations_errors_total",
	I0821 10:52:37.346085   97516 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0821 10:52:37.346096   97516 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0821 10:52:37.346108   97516 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0821 10:52:37.346122   97516 command_runner.go:130] > # 	"image_pulls_success_total",
	I0821 10:52:37.346131   97516 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0821 10:52:37.346139   97516 command_runner.go:130] > # 	"containers_oom_count_total",
	I0821 10:52:37.346144   97516 command_runner.go:130] > # ]
	I0821 10:52:37.346156   97516 command_runner.go:130] > # The port on which the metrics server will listen.
	I0821 10:52:37.346166   97516 command_runner.go:130] > # metrics_port = 9090
	I0821 10:52:37.346175   97516 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0821 10:52:37.346185   97516 command_runner.go:130] > # metrics_socket = ""
	I0821 10:52:37.346197   97516 command_runner.go:130] > # The certificate for the secure metrics server.
	I0821 10:52:37.346210   97516 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0821 10:52:37.346224   97516 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0821 10:52:37.346233   97516 command_runner.go:130] > # certificate on any modification event.
	I0821 10:52:37.346240   97516 command_runner.go:130] > # metrics_cert = ""
	I0821 10:52:37.346248   97516 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0821 10:52:37.346260   97516 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0821 10:52:37.346270   97516 command_runner.go:130] > # metrics_key = ""
	I0821 10:52:37.346283   97516 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0821 10:52:37.346292   97516 command_runner.go:130] > [crio.tracing]
	I0821 10:52:37.346308   97516 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0821 10:52:37.346317   97516 command_runner.go:130] > # enable_tracing = false
	I0821 10:52:37.346326   97516 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0821 10:52:37.346336   97516 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0821 10:52:37.346348   97516 command_runner.go:130] > # Number of samples to collect per million spans.
	I0821 10:52:37.346359   97516 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0821 10:52:37.346373   97516 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0821 10:52:37.346382   97516 command_runner.go:130] > [crio.stats]
	I0821 10:52:37.346395   97516 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0821 10:52:37.346429   97516 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0821 10:52:37.346440   97516 command_runner.go:130] > # stats_collection_period = 0
	I0821 10:52:37.347950   97516 command_runner.go:130] ! time="2023-08-21 10:52:37.338919018Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0821 10:52:37.347970   97516 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
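	Several options in the dump above are flagged with "This option supports live configuration reload"; per the header comments, that reload is triggered by sending SIGHUP to the running crio process. A short sketch of doing exactly that (assumes a Linux node with pidof on $PATH; illustrative only, not something minikube runs):

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
	"syscall"
)

func main() {
	// Find the crio daemon's PID (pidof prints one or more PIDs).
	out, err := exec.Command("pidof", "crio").Output()
	if err != nil {
		fmt.Println("crio not running:", err)
		return
	}
	pid, err := strconv.Atoi(strings.Fields(string(out))[0])
	if err != nil {
		fmt.Println("unexpected pidof output:", err)
		return
	}
	// SIGHUP makes CRI-O re-read the options marked as supporting
	// live configuration reload (e.g. log_level, seccomp_profile).
	if err := syscall.Kill(pid, syscall.SIGHUP); err != nil {
		fmt.Println("signal failed:", err)
	}
}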
	I0821 10:52:37.348051   97516 cni.go:84] Creating CNI manager for ""
	I0821 10:52:37.348063   97516 cni.go:136] 1 nodes found, recommending kindnet
	I0821 10:52:37.348078   97516 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0821 10:52:37.348098   97516 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.27.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-200985 NodeName:multinode-200985 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0821 10:52:37.348232   97516 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-200985"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0821 10:52:37.348303   97516 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-200985 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.4 ClusterName:multinode-200985 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
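	The kubeadm config rendered above is a single YAML stream holding four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal sketch that splits such a stream and prints each document's kind, assuming the third-party gopkg.in/yaml.v3 module and a locally saved copy of the file (both assumptions; this is not minikube code):

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the config above
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		// Decode returns io.EOF once every document in the stream is read.
		if err := dec.Decode(&doc); err != nil {
			break
		}
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
}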
	I0821 10:52:37.348350   97516 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.4
	I0821 10:52:37.355516   97516 command_runner.go:130] > kubeadm
	I0821 10:52:37.355532   97516 command_runner.go:130] > kubectl
	I0821 10:52:37.355538   97516 command_runner.go:130] > kubelet
	I0821 10:52:37.356152   97516 binaries.go:44] Found k8s binaries, skipping transfer
	I0821 10:52:37.356230   97516 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0821 10:52:37.363572   97516 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I0821 10:52:37.378243   97516 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0821 10:52:37.393409   97516 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I0821 10:52:37.408283   97516 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0821 10:52:37.411177   97516 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
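	The bash pipeline above keeps the control-plane host entry idempotent: it strips any existing control-plane.minikube.internal line from /etc/hosts and appends the current address. A hypothetical Go equivalent of the same rewrite (needs root, just like the sudo cp in the log):

package main

import (
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal"
	const entry = "192.168.58.2\t" + host

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any stale line for the control-plane name, mirroring grep -v.
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}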
	I0821 10:52:37.420587   97516 certs.go:56] Setting up /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/multinode-200985 for IP: 192.168.58.2
	I0821 10:52:37.420625   97516 certs.go:190] acquiring lock for shared ca certs: {Name:mkb88db7eb1befc1f1b3279575458c71b2313cc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 10:52:37.420770   97516 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17102-5717/.minikube/ca.key
	I0821 10:52:37.420808   97516 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17102-5717/.minikube/proxy-client-ca.key
	I0821 10:52:37.420843   97516 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/multinode-200985/client.key
	I0821 10:52:37.420852   97516 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/multinode-200985/client.crt with IP's: []
	I0821 10:52:37.703908   97516 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/multinode-200985/client.crt ...
	I0821 10:52:37.703939   97516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/multinode-200985/client.crt: {Name:mk9960f72d678a4f04dc532406dc30e0449bead6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 10:52:37.704114   97516 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/multinode-200985/client.key ...
	I0821 10:52:37.704124   97516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/multinode-200985/client.key: {Name:mk4e664492bc48d60c1fa4393385147ad4c5d722 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 10:52:37.704194   97516 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/multinode-200985/apiserver.key.cee25041
	I0821 10:52:37.704207   97516 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/multinode-200985/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0821 10:52:37.792814   97516 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/multinode-200985/apiserver.crt.cee25041 ...
	I0821 10:52:37.792843   97516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/multinode-200985/apiserver.crt.cee25041: {Name:mkbf8f56ce98d1907e34793cf00fa0e58982b570 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 10:52:37.792994   97516 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/multinode-200985/apiserver.key.cee25041 ...
	I0821 10:52:37.793006   97516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/multinode-200985/apiserver.key.cee25041: {Name:mkac59f13553756b26ff1383e8c37dae6a20fd46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 10:52:37.793066   97516 certs.go:337] copying /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/multinode-200985/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/multinode-200985/apiserver.crt
	I0821 10:52:37.793130   97516 certs.go:341] copying /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/multinode-200985/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/multinode-200985/apiserver.key
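
The apiserver certificate generated here is signed for the node IP and the in-cluster service addresses (192.168.58.2, 10.96.0.1, 127.0.0.1, 10.0.0.1). Once the file is on disk, the SANs can be confirmed with openssl, as a sketch using the path from the log:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/multinode-200985/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'
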
	I0821 10:52:37.793175   97516 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/multinode-200985/proxy-client.key
	I0821 10:52:37.793187   97516 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/multinode-200985/proxy-client.crt with IP's: []
	I0821 10:52:37.857423   97516 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/multinode-200985/proxy-client.crt ...
	I0821 10:52:37.857451   97516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/multinode-200985/proxy-client.crt: {Name:mk81effa1ae8d7696482fbe56a72c661cbf6a8ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 10:52:37.857601   97516 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/multinode-200985/proxy-client.key ...
	I0821 10:52:37.857611   97516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/multinode-200985/proxy-client.key: {Name:mk6bb433259130eb0dd4c28dadae1679a1b8e3db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 10:52:37.857670   97516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/multinode-200985/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0821 10:52:37.857687   97516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/multinode-200985/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0821 10:52:37.857697   97516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/multinode-200985/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0821 10:52:37.857711   97516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/multinode-200985/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0821 10:52:37.857722   97516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0821 10:52:37.857731   97516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0821 10:52:37.857744   97516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0821 10:52:37.857757   97516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0821 10:52:37.857801   97516 certs.go:437] found cert: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/home/jenkins/minikube-integration/17102-5717/.minikube/certs/12460.pem (1338 bytes)
	W0821 10:52:37.857839   97516 certs.go:433] ignoring /home/jenkins/minikube-integration/17102-5717/.minikube/certs/home/jenkins/minikube-integration/17102-5717/.minikube/certs/12460_empty.pem, impossibly tiny 0 bytes
	I0821 10:52:37.857850   97516 certs.go:437] found cert: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca-key.pem (1679 bytes)
	I0821 10:52:37.857877   97516 certs.go:437] found cert: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca.pem (1078 bytes)
	I0821 10:52:37.857899   97516 certs.go:437] found cert: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/home/jenkins/minikube-integration/17102-5717/.minikube/certs/cert.pem (1123 bytes)
	I0821 10:52:37.857929   97516 certs.go:437] found cert: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/home/jenkins/minikube-integration/17102-5717/.minikube/certs/key.pem (1675 bytes)
	I0821 10:52:37.857968   97516 certs.go:437] found cert: /home/jenkins/minikube-integration/17102-5717/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17102-5717/.minikube/files/etc/ssl/certs/124602.pem (1708 bytes)
	I0821 10:52:37.857996   97516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/12460.pem -> /usr/share/ca-certificates/12460.pem
	I0821 10:52:37.858010   97516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/files/etc/ssl/certs/124602.pem -> /usr/share/ca-certificates/124602.pem
	I0821 10:52:37.858021   97516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0821 10:52:37.858537   97516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/multinode-200985/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0821 10:52:37.879671   97516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/multinode-200985/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0821 10:52:37.899459   97516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/multinode-200985/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0821 10:52:37.919482   97516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/multinode-200985/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0821 10:52:37.938833   97516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0821 10:52:37.958243   97516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0821 10:52:37.977793   97516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0821 10:52:37.997166   97516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0821 10:52:38.016389   97516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/certs/12460.pem --> /usr/share/ca-certificates/12460.pem (1338 bytes)
	I0821 10:52:38.036217   97516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/files/etc/ssl/certs/124602.pem --> /usr/share/ca-certificates/124602.pem (1708 bytes)
	I0821 10:52:38.057535   97516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0821 10:52:38.077222   97516 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0821 10:52:38.092101   97516 ssh_runner.go:195] Run: openssl version
	I0821 10:52:38.096601   97516 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0821 10:52:38.096795   97516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/124602.pem && ln -fs /usr/share/ca-certificates/124602.pem /etc/ssl/certs/124602.pem"
	I0821 10:52:38.104449   97516 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/124602.pem
	I0821 10:52:38.107186   97516 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 21 10:39 /usr/share/ca-certificates/124602.pem
	I0821 10:52:38.107207   97516 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 21 10:39 /usr/share/ca-certificates/124602.pem
	I0821 10:52:38.107235   97516 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/124602.pem
	I0821 10:52:38.112959   97516 command_runner.go:130] > 3ec20f2e
	I0821 10:52:38.113093   97516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/124602.pem /etc/ssl/certs/3ec20f2e.0"
	I0821 10:52:38.120998   97516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0821 10:52:38.128774   97516 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0821 10:52:38.131720   97516 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 21 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I0821 10:52:38.131747   97516 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 21 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I0821 10:52:38.131787   97516 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0821 10:52:38.137547   97516 command_runner.go:130] > b5213941
	I0821 10:52:38.137607   97516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0821 10:52:38.145307   97516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12460.pem && ln -fs /usr/share/ca-certificates/12460.pem /etc/ssl/certs/12460.pem"
	I0821 10:52:38.152995   97516 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12460.pem
	I0821 10:52:38.155821   97516 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 21 10:39 /usr/share/ca-certificates/12460.pem
	I0821 10:52:38.155840   97516 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 21 10:39 /usr/share/ca-certificates/12460.pem
	I0821 10:52:38.155869   97516 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12460.pem
	I0821 10:52:38.161635   97516 command_runner.go:130] > 51391683
	I0821 10:52:38.161689   97516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12460.pem /etc/ssl/certs/51391683.0"
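
Each of the three rounds above follows OpenSSL's subject-hash convention: 'openssl x509 -hash' prints the hash of the certificate subject, and a symlink named <hash>.0 in /etc/ssl/certs is what lets the TLS library locate the CA during verification (the same thing c_rehash automates). Condensed into two commands for one of the certificates, as a sketch:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
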
	I0821 10:52:38.169477   97516 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0821 10:52:38.172331   97516 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0821 10:52:38.172374   97516 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0821 10:52:38.172406   97516 kubeadm.go:404] StartCluster: {Name:multinode-200985 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:multinode-200985 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 10:52:38.172479   97516 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0821 10:52:38.172513   97516 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0821 10:52:38.205299   97516 cri.go:89] found id: ""
	I0821 10:52:38.205360   97516 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0821 10:52:38.212365   97516 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0821 10:52:38.212395   97516 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0821 10:52:38.212405   97516 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0821 10:52:38.213003   97516 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0821 10:52:38.220714   97516 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0821 10:52:38.220802   97516 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0821 10:52:38.228130   97516 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0821 10:52:38.228157   97516 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0821 10:52:38.228170   97516 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0821 10:52:38.228183   97516 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0821 10:52:38.228224   97516 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0821 10:52:38.228268   97516 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0821 10:52:38.271156   97516 kubeadm.go:322] [init] Using Kubernetes version: v1.27.4
	I0821 10:52:38.271180   97516 command_runner.go:130] > [init] Using Kubernetes version: v1.27.4
	I0821 10:52:38.271215   97516 kubeadm.go:322] [preflight] Running pre-flight checks
	I0821 10:52:38.271222   97516 command_runner.go:130] > [preflight] Running pre-flight checks
	I0821 10:52:38.304811   97516 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0821 10:52:38.304839   97516 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0821 10:52:38.304949   97516 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1039-gcp
	I0821 10:52:38.304961   97516 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1039-gcp
	I0821 10:52:38.305011   97516 kubeadm.go:322] OS: Linux
	I0821 10:52:38.305023   97516 command_runner.go:130] > OS: Linux
	I0821 10:52:38.305076   97516 kubeadm.go:322] CGROUPS_CPU: enabled
	I0821 10:52:38.305098   97516 command_runner.go:130] > CGROUPS_CPU: enabled
	I0821 10:52:38.305173   97516 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0821 10:52:38.305184   97516 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0821 10:52:38.305248   97516 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0821 10:52:38.305258   97516 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0821 10:52:38.305322   97516 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0821 10:52:38.305333   97516 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0821 10:52:38.305408   97516 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0821 10:52:38.305419   97516 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0821 10:52:38.305477   97516 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0821 10:52:38.305485   97516 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0821 10:52:38.305520   97516 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0821 10:52:38.305527   97516 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0821 10:52:38.305564   97516 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0821 10:52:38.305571   97516 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0821 10:52:38.305607   97516 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0821 10:52:38.305624   97516 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0821 10:52:38.365198   97516 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0821 10:52:38.365221   97516 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0821 10:52:38.365317   97516 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0821 10:52:38.365328   97516 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0821 10:52:38.365467   97516 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0821 10:52:38.365490   97516 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0821 10:52:38.549217   97516 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0821 10:52:38.552768   97516 out.go:204]   - Generating certificates and keys ...
	I0821 10:52:38.549335   97516 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0821 10:52:38.552933   97516 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0821 10:52:38.552947   97516 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0821 10:52:38.553031   97516 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0821 10:52:38.553043   97516 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0821 10:52:38.727327   97516 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0821 10:52:38.727382   97516 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0821 10:52:38.835604   97516 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0821 10:52:38.835632   97516 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0821 10:52:39.014955   97516 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0821 10:52:39.014981   97516 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0821 10:52:39.255574   97516 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0821 10:52:39.255602   97516 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0821 10:52:39.602438   97516 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0821 10:52:39.602477   97516 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0821 10:52:39.602619   97516 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-200985] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0821 10:52:39.602632   97516 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-200985] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0821 10:52:39.923845   97516 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0821 10:52:39.923870   97516 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0821 10:52:39.924017   97516 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-200985] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0821 10:52:39.924026   97516 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-200985] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0821 10:52:40.113341   97516 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0821 10:52:40.113382   97516 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0821 10:52:40.281118   97516 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0821 10:52:40.281146   97516 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0821 10:52:40.378249   97516 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0821 10:52:40.378280   97516 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0821 10:52:40.378388   97516 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0821 10:52:40.378398   97516 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0821 10:52:40.516321   97516 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0821 10:52:40.516347   97516 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0821 10:52:40.613978   97516 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0821 10:52:40.614017   97516 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0821 10:52:40.696773   97516 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0821 10:52:40.696822   97516 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0821 10:52:40.893785   97516 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0821 10:52:40.893808   97516 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0821 10:52:40.901562   97516 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0821 10:52:40.901584   97516 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0821 10:52:40.902370   97516 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0821 10:52:40.902392   97516 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0821 10:52:40.902437   97516 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0821 10:52:40.902446   97516 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0821 10:52:40.972000   97516 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0821 10:52:40.974399   97516 out.go:204]   - Booting up control plane ...
	I0821 10:52:40.972097   97516 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0821 10:52:40.974524   97516 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0821 10:52:40.974544   97516 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0821 10:52:40.975616   97516 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0821 10:52:40.975636   97516 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0821 10:52:40.977016   97516 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0821 10:52:40.977041   97516 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0821 10:52:40.978028   97516 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0821 10:52:40.978047   97516 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0821 10:52:40.980900   97516 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0821 10:52:40.980917   97516 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0821 10:52:45.983070   97516 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.002089 seconds
	I0821 10:52:45.983106   97516 command_runner.go:130] > [apiclient] All control plane components are healthy after 5.002089 seconds
	I0821 10:52:45.983281   97516 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0821 10:52:45.983305   97516 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0821 10:52:45.995258   97516 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0821 10:52:45.995279   97516 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0821 10:52:46.513854   97516 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0821 10:52:46.513880   97516 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0821 10:52:46.514041   97516 kubeadm.go:322] [mark-control-plane] Marking the node multinode-200985 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0821 10:52:46.514052   97516 command_runner.go:130] > [mark-control-plane] Marking the node multinode-200985 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0821 10:52:47.023409   97516 kubeadm.go:322] [bootstrap-token] Using token: xy5may.upnd61x7ht13519t
	I0821 10:52:47.024872   97516 out.go:204]   - Configuring RBAC rules ...
	I0821 10:52:47.023488   97516 command_runner.go:130] > [bootstrap-token] Using token: xy5may.upnd61x7ht13519t
	I0821 10:52:47.025024   97516 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0821 10:52:47.025033   97516 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0821 10:52:47.028518   97516 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0821 10:52:47.028557   97516 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0821 10:52:47.034298   97516 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0821 10:52:47.034317   97516 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0821 10:52:47.037748   97516 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0821 10:52:47.037768   97516 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0821 10:52:47.040322   97516 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0821 10:52:47.040337   97516 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0821 10:52:47.043882   97516 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0821 10:52:47.043899   97516 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0821 10:52:47.053562   97516 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0821 10:52:47.053588   97516 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0821 10:52:47.275023   97516 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0821 10:52:47.275052   97516 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0821 10:52:47.441072   97516 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0821 10:52:47.441101   97516 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0821 10:52:47.442035   97516 kubeadm.go:322] 
	I0821 10:52:47.442119   97516 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0821 10:52:47.442134   97516 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0821 10:52:47.442143   97516 kubeadm.go:322] 
	I0821 10:52:47.442243   97516 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0821 10:52:47.442254   97516 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0821 10:52:47.442257   97516 kubeadm.go:322] 
	I0821 10:52:47.442282   97516 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0821 10:52:47.442298   97516 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0821 10:52:47.442385   97516 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0821 10:52:47.442417   97516 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0821 10:52:47.442508   97516 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0821 10:52:47.442521   97516 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0821 10:52:47.442528   97516 kubeadm.go:322] 
	I0821 10:52:47.442616   97516 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0821 10:52:47.442630   97516 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0821 10:52:47.442636   97516 kubeadm.go:322] 
	I0821 10:52:47.442714   97516 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0821 10:52:47.442741   97516 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0821 10:52:47.442761   97516 kubeadm.go:322] 
	I0821 10:52:47.442841   97516 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0821 10:52:47.442850   97516 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0821 10:52:47.442981   97516 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0821 10:52:47.442997   97516 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0821 10:52:47.443081   97516 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0821 10:52:47.443092   97516 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0821 10:52:47.443097   97516 kubeadm.go:322] 
	I0821 10:52:47.443211   97516 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0821 10:52:47.443234   97516 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0821 10:52:47.443322   97516 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0821 10:52:47.443330   97516 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0821 10:52:47.443335   97516 kubeadm.go:322] 
	I0821 10:52:47.443465   97516 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token xy5may.upnd61x7ht13519t \
	I0821 10:52:47.443475   97516 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token xy5may.upnd61x7ht13519t \
	I0821 10:52:47.443611   97516 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a6ae141b3a3795878aa14999e04688399a9a305fa66151b732d0ee2f32cf9691 \
	I0821 10:52:47.443629   97516 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:a6ae141b3a3795878aa14999e04688399a9a305fa66151b732d0ee2f32cf9691 \
	I0821 10:52:47.443655   97516 kubeadm.go:322] 	--control-plane 
	I0821 10:52:47.443666   97516 command_runner.go:130] > 	--control-plane 
	I0821 10:52:47.443676   97516 kubeadm.go:322] 
	I0821 10:52:47.443791   97516 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0821 10:52:47.443804   97516 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0821 10:52:47.443809   97516 kubeadm.go:322] 
	I0821 10:52:47.443937   97516 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token xy5may.upnd61x7ht13519t \
	I0821 10:52:47.443957   97516 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token xy5may.upnd61x7ht13519t \
	I0821 10:52:47.444108   97516 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a6ae141b3a3795878aa14999e04688399a9a305fa66151b732d0ee2f32cf9691 
	I0821 10:52:47.444124   97516 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:a6ae141b3a3795878aa14999e04688399a9a305fa66151b732d0ee2f32cf9691 
	I0821 10:52:47.446004   97516 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1039-gcp\n", err: exit status 1
	I0821 10:52:47.446027   97516 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1039-gcp\n", err: exit status 1
	I0821 10:52:47.446131   97516 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0821 10:52:47.446144   97516 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
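
The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 digest of the cluster CA's public key. It can be recomputed on the control plane to verify a join command, following the standard kubeadm recipe (a sketch; the CA path comes from the certificateDir logged earlier, and an RSA key is assumed):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
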
	I0821 10:52:47.446162   97516 cni.go:84] Creating CNI manager for ""
	I0821 10:52:47.446178   97516 cni.go:136] 1 nodes found, recommending kindnet
	I0821 10:52:47.447874   97516 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0821 10:52:47.449201   97516 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0821 10:52:47.453115   97516 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0821 10:52:47.453139   97516 command_runner.go:130] >   Size: 3955775   	Blocks: 7728       IO Block: 4096   regular file
	I0821 10:52:47.453149   97516 command_runner.go:130] > Device: 34h/52d	Inode: 543804      Links: 1
	I0821 10:52:47.453164   97516 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0821 10:52:47.453177   97516 command_runner.go:130] > Access: 2023-05-09 19:53:47.000000000 +0000
	I0821 10:52:47.453188   97516 command_runner.go:130] > Modify: 2023-05-09 19:53:47.000000000 +0000
	I0821 10:52:47.453199   97516 command_runner.go:130] > Change: 2023-08-21 10:33:51.094069544 +0000
	I0821 10:52:47.453207   97516 command_runner.go:130] >  Birth: 2023-08-21 10:33:51.070067242 +0000
	I0821 10:52:47.453257   97516 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.27.4/kubectl ...
	I0821 10:52:47.453274   97516 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0821 10:52:47.469227   97516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0821 10:52:48.193684   97516 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0821 10:52:48.200366   97516 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0821 10:52:48.207591   97516 command_runner.go:130] > serviceaccount/kindnet created
	I0821 10:52:48.215942   97516 command_runner.go:130] > daemonset.apps/kindnet created
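
With a single node found, minikube selects kindnet as the CNI and applies its manifest through the cluster's own kubectl binary. A quick rollout check, as a sketch (the kindnet DaemonSet is assumed to land in kube-system, matching the manifest applied above):

	sudo /var/lib/minikube/binaries/v1.27.4/kubectl \
	  --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get ds kindnet
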
	I0821 10:52:48.219540   97516 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0821 10:52:48.219626   97516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:52:48.219648   97516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43 minikube.k8s.io/name=multinode-200985 minikube.k8s.io/updated_at=2023_08_21T10_52_48_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:52:48.226206   97516 command_runner.go:130] > -16
	I0821 10:52:48.226232   97516 ops.go:34] apiserver oom_adj: -16
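
An oom_adj of -16 tells the kernel's OOM killer to strongly prefer other victims over kube-apiserver (the legacy scale runs from -17, never kill, to +15). The probe reduces to a one-liner:

	cat /proc/$(pgrep kube-apiserver)/oom_adj    # prints -16: apiserver shielded from the OOM killer
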
	I0821 10:52:48.290567   97516 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0821 10:52:48.290624   97516 command_runner.go:130] > node/multinode-200985 labeled
	I0821 10:52:48.290685   97516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:52:48.357138   97516 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0821 10:52:48.359712   97516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:52:48.450648   97516 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0821 10:52:48.951433   97516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:52:49.014161   97516 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0821 10:52:49.451778   97516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:52:49.512054   97516 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0821 10:52:49.950882   97516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:52:50.009261   97516 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0821 10:52:50.451103   97516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:52:50.513235   97516 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0821 10:52:50.950816   97516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:52:51.010071   97516 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0821 10:52:51.450978   97516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:52:51.512469   97516 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0821 10:52:51.951059   97516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:52:52.013371   97516 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0821 10:52:52.450894   97516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:52:52.509766   97516 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0821 10:52:52.951666   97516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:52:53.012273   97516 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0821 10:52:53.450827   97516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:52:53.510431   97516 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0821 10:52:53.951572   97516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:52:54.013874   97516 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0821 10:52:54.451452   97516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:52:54.511010   97516 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0821 10:52:54.950871   97516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:52:55.009439   97516 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0821 10:52:55.451615   97516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:52:55.513258   97516 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0821 10:52:55.950808   97516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:52:56.015302   97516 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0821 10:52:56.450868   97516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:52:56.513586   97516 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0821 10:52:56.950986   97516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:52:57.016135   97516 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0821 10:52:57.451748   97516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:52:57.519116   97516 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0821 10:52:57.951716   97516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:52:58.012096   97516 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0821 10:52:58.451517   97516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:52:58.512105   97516 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0821 10:52:58.950786   97516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:52:59.014384   97516 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0821 10:52:59.450940   97516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:52:59.511573   97516 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0821 10:52:59.951672   97516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:53:00.011511   97516 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0821 10:53:00.451785   97516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:53:00.516932   97516 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0821 10:53:00.951535   97516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:53:01.016686   97516 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0821 10:53:01.451029   97516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 10:53:01.572657   97516 command_runner.go:130] > NAME      SECRETS   AGE
	I0821 10:53:01.572678   97516 command_runner.go:130] > default   0         0s
	I0821 10:53:01.575152   97516 kubeadm.go:1081] duration metric: took 13.355586935s to wait for elevateKubeSystemPrivileges.
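
The run of 'serviceaccounts "default" not found' errors above is expected: the ServiceAccount controller in kube-controller-manager creates the default ServiceAccount asynchronously after the control plane comes up, so minikube polls roughly twice a second until it exists (about 13s here). The equivalent wait loop, as a sketch:

	until sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done
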
	I0821 10:53:01.575186   97516 kubeadm.go:406] StartCluster complete in 23.402781762s
	I0821 10:53:01.575205   97516 settings.go:142] acquiring lock: {Name:mkafc51d9ee0fb589973b887f0111ccc8fd1075b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 10:53:01.575281   97516 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17102-5717/kubeconfig
	I0821 10:53:01.576165   97516 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-5717/kubeconfig: {Name:mkb50cf560191d5f6ff2b436dd414f0b5471024e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 10:53:01.576458   97516 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0821 10:53:01.576548   97516 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0821 10:53:01.576661   97516 addons.go:69] Setting storage-provisioner=true in profile "multinode-200985"
	I0821 10:53:01.576690   97516 addons.go:69] Setting default-storageclass=true in profile "multinode-200985"
	I0821 10:53:01.576748   97516 addons.go:231] Setting addon storage-provisioner=true in "multinode-200985"
	I0821 10:53:01.576757   97516 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-200985"
	I0821 10:53:01.576676   97516 config.go:182] Loaded profile config "multinode-200985": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0821 10:53:01.576812   97516 host.go:66] Checking if "multinode-200985" exists ...
	I0821 10:53:01.576791   97516 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17102-5717/kubeconfig
	I0821 10:53:01.577121   97516 cli_runner.go:164] Run: docker container inspect multinode-200985 --format={{.State.Status}}
	I0821 10:53:01.577263   97516 kapi.go:59] client config for multinode-200985: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17102-5717/.minikube/profiles/multinode-200985/client.crt", KeyFile:"/home/jenkins/minikube-integration/17102-5717/.minikube/profiles/multinode-200985/client.key", CAFile:"/home/jenkins/minikube-integration/17102-5717/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d61e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0821 10:53:01.577390   97516 cli_runner.go:164] Run: docker container inspect multinode-200985 --format={{.State.Status}}
	I0821 10:53:01.578226   97516 cert_rotation.go:137] Starting client certificate rotation controller
	I0821 10:53:01.578542   97516 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0821 10:53:01.578563   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:01.578574   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:01.578585   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:01.597030   97516 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0821 10:53:01.596566   97516 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17102-5717/kubeconfig
	I0821 10:53:01.598695   97516 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0821 10:53:01.598708   97516 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0821 10:53:01.598807   97516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-200985
	I0821 10:53:01.599050   97516 kapi.go:59] client config for multinode-200985: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17102-5717/.minikube/profiles/multinode-200985/client.crt", KeyFile:"/home/jenkins/minikube-integration/17102-5717/.minikube/profiles/multinode-200985/client.key", CAFile:"/home/jenkins/minikube-integration/17102-5717/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d61e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0821 10:53:01.600136   97516 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0821 10:53:01.600165   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:01.600177   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:01.600187   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:01.614773   97516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/multinode-200985/id_rsa Username:docker}
	I0821 10:53:01.638729   97516 round_trippers.go:574] Response Status: 200 OK in 38 milliseconds
	I0821 10:53:01.638757   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:01.638768   97516 round_trippers.go:580]     Audit-Id: ac1605ed-7e74-449a-a9be-e1eec7a11f5b
	I0821 10:53:01.638776   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:01.638784   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:01.638791   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:01.638799   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:01.638806   97516 round_trippers.go:580]     Content-Length: 109
	I0821 10:53:01.638813   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:01 GMT
	I0821 10:53:01.638849   97516 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"350"},"items":[]}
	I0821 10:53:01.639136   97516 round_trippers.go:574] Response Status: 200 OK in 60 milliseconds
	I0821 10:53:01.639161   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:01.639171   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:01.639186   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:01.639199   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:01.639208   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:01.639219   97516 round_trippers.go:580]     Content-Length: 291
	I0821 10:53:01.639232   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:01 GMT
	I0821 10:53:01.639242   97516 round_trippers.go:580]     Audit-Id: 88a80dc0-1f3e-4ad2-9135-4b86e5c06220
	I0821 10:53:01.639269   97516 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a051514f-f439-40a8-b010-41566895539b","resourceVersion":"344","creationTimestamp":"2023-08-21T10:52:47Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0821 10:53:01.639275   97516 addons.go:231] Setting addon default-storageclass=true in "multinode-200985"
	I0821 10:53:01.639318   97516 host.go:66] Checking if "multinode-200985" exists ...
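
The empty StorageClassList in the 10:53:01.638849 response is what drives the "Setting addon default-storageclass=true" decision just above: no StorageClass exists yet, so applying storageclass.yaml is safe. A hedged sketch of that check with client-go (the function name is illustrative, not minikube's):

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// needsDefaultStorageClass reports whether the cluster has no StorageClass,
// the condition visible in the empty StorageClassList response above.
func needsDefaultStorageClass(ctx context.Context, cs kubernetes.Interface) (bool, error) {
	scs, err := cs.StorageV1().StorageClasses().List(ctx, metav1.ListOptions{})
	if err != nil {
		return false, err
	}
	return len(scs.Items) == 0, nil
}
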
	I0821 10:53:01.639700   97516 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a051514f-f439-40a8-b010-41566895539b","resourceVersion":"344","creationTimestamp":"2023-08-21T10:52:47Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0821 10:53:01.639765   97516 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0821 10:53:01.639782   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:01.639793   97516 round_trippers.go:473]     Content-Type: application/json
	I0821 10:53:01.639807   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:01.639818   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:01.639857   97516 cli_runner.go:164] Run: docker container inspect multinode-200985 --format={{.State.Status}}
	I0821 10:53:01.646622   97516 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0821 10:53:01.646649   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:01.646661   97516 round_trippers.go:580]     Content-Length: 291
	I0821 10:53:01.646669   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:01 GMT
	I0821 10:53:01.646676   97516 round_trippers.go:580]     Audit-Id: 6dc26bfd-3f4f-4017-97ea-f7aecde1d8eb
	I0821 10:53:01.646684   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:01.646691   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:01.646699   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:01.646707   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:01.646742   97516 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a051514f-f439-40a8-b010-41566895539b","resourceVersion":"351","creationTimestamp":"2023-08-21T10:52:47Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0821 10:53:01.646911   97516 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0821 10:53:01.646929   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:01.646938   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:01.646945   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:01.649500   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:01.649520   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:01.649530   97516 round_trippers.go:580]     Audit-Id: b458e4b0-f1bf-4389-a832-c82bee3655c6
	I0821 10:53:01.649540   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:01.649548   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:01.649557   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:01.649570   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:01.649579   97516 round_trippers.go:580]     Content-Length: 291
	I0821 10:53:01.649588   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:01 GMT
	I0821 10:53:01.649613   97516 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a051514f-f439-40a8-b010-41566895539b","resourceVersion":"351","creationTimestamp":"2023-08-21T10:52:47Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0821 10:53:01.649716   97516 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-200985" context rescaled to 1 replicas
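
The GET/PUT pair against .../deployments/coredns/scale above is a standard round trip on the scale subresource: read the current Scale, set spec.replicas to 1, write it back. Roughly the same rescale expressed with typed client-go calls (a sketch, not the kapi.go implementation):

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// rescaleCoreDNS mirrors the GET/PUT on the coredns scale subresource shown
// in the log: fetch the Scale object, drop spec.replicas to 1, update.
func rescaleCoreDNS(ctx context.Context, cs kubernetes.Interface) error {
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = 1
	_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
	return err
}
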
	I0821 10:53:01.649754   97516 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0821 10:53:01.651538   97516 out.go:177] * Verifying Kubernetes components...
	I0821 10:53:01.653045   97516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0821 10:53:01.663026   97516 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0821 10:53:01.663051   97516 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0821 10:53:01.663108   97516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-200985
	I0821 10:53:01.685153   97516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/multinode-200985/id_rsa Username:docker}
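
The sshutil lines above, like the earlier "scp memory -->" transfers, ride on a plain SSH connection to the node's forwarded port (127.0.0.1:32847) authenticated with the machine's id_rsa. A rough sketch of that plumbing with golang.org/x/crypto/ssh, assuming that is the underlying mechanism; the host key is deliberately unchecked, which is tolerable for a throwaway test node but not elsewhere:

package sketch

import (
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH dials addr with the given private key and runs one command,
// returning its combined output. Parameters correspond to the values logged
// above (addr "127.0.0.1:32847", user "docker", the machine's id_rsa).
func runOverSSH(addr, keyPath, user, cmd string) ([]byte, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM only
	})
	if err != nil {
		return nil, err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return nil, err
	}
	defer session.Close()
	return session.CombinedOutput(cmd)
}
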
	I0821 10:53:01.774863   97516 command_runner.go:130] > apiVersion: v1
	I0821 10:53:01.774940   97516 command_runner.go:130] > data:
	I0821 10:53:01.774946   97516 command_runner.go:130] >   Corefile: |
	I0821 10:53:01.774949   97516 command_runner.go:130] >     .:53 {
	I0821 10:53:01.774953   97516 command_runner.go:130] >         errors
	I0821 10:53:01.774958   97516 command_runner.go:130] >         health {
	I0821 10:53:01.774962   97516 command_runner.go:130] >            lameduck 5s
	I0821 10:53:01.774966   97516 command_runner.go:130] >         }
	I0821 10:53:01.774970   97516 command_runner.go:130] >         ready
	I0821 10:53:01.774976   97516 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0821 10:53:01.774981   97516 command_runner.go:130] >            pods insecure
	I0821 10:53:01.774986   97516 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0821 10:53:01.775009   97516 command_runner.go:130] >            ttl 30
	I0821 10:53:01.775013   97516 command_runner.go:130] >         }
	I0821 10:53:01.775018   97516 command_runner.go:130] >         prometheus :9153
	I0821 10:53:01.775025   97516 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0821 10:53:01.775030   97516 command_runner.go:130] >            max_concurrent 1000
	I0821 10:53:01.775036   97516 command_runner.go:130] >         }
	I0821 10:53:01.775040   97516 command_runner.go:130] >         cache 30
	I0821 10:53:01.775046   97516 command_runner.go:130] >         loop
	I0821 10:53:01.775050   97516 command_runner.go:130] >         reload
	I0821 10:53:01.775056   97516 command_runner.go:130] >         loadbalance
	I0821 10:53:01.775060   97516 command_runner.go:130] >     }
	I0821 10:53:01.775066   97516 command_runner.go:130] > kind: ConfigMap
	I0821 10:53:01.775070   97516 command_runner.go:130] > metadata:
	I0821 10:53:01.775076   97516 command_runner.go:130] >   creationTimestamp: "2023-08-21T10:52:47Z"
	I0821 10:53:01.775084   97516 command_runner.go:130] >   name: coredns
	I0821 10:53:01.775087   97516 command_runner.go:130] >   namespace: kube-system
	I0821 10:53:01.775091   97516 command_runner.go:130] >   resourceVersion: "218"
	I0821 10:53:01.775096   97516 command_runner.go:130] >   uid: 18a8a4de-dd11-4a4a-98d1-98aa1033dff6
	I0821 10:53:01.777478   97516 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
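
The pipeline above rewrites the Corefile dumped a few lines earlier: kubectl get fetches the coredns ConfigMap, sed splices a hosts block for host.minikube.internal in front of the forward plugin (and a log directive before errors), and kubectl replace writes it back. The same edit done through client-go rather than sed could look like this (illustrative only; the log shows minikube really does shell out, and the snippet assumes the Corefile key exists with the 8-space plugin indentation shown above):

package sketch

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// injectHostRecord inserts a CoreDNS hosts block for host.minikube.internal
// ahead of the forward plugin, the same edit the kubectl|sed|kubectl replace
// pipeline in the log performs.
func injectHostRecord(ctx context.Context, cs kubernetes.Interface, hostIP string) error {
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	hosts := "        hosts {\n           " + hostIP + " host.minikube.internal\n           fallthrough\n        }\n"
	cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"], "        forward .", hosts+"        forward .", 1)
	_, err = cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
	return err
}
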
	I0821 10:53:01.777755   97516 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17102-5717/kubeconfig
	I0821 10:53:01.777989   97516 kapi.go:59] client config for multinode-200985: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17102-5717/.minikube/profiles/multinode-200985/client.crt", KeyFile:"/home/jenkins/minikube-integration/17102-5717/.minikube/profiles/multinode-200985/client.key", CAFile:"/home/jenkins/minikube-integration/17102-5717/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d61e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0821 10:53:01.778240   97516 node_ready.go:35] waiting up to 6m0s for node "multinode-200985" to be "Ready" ...
	I0821 10:53:01.778303   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:01.778318   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:01.778330   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:01.778339   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:01.780411   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:01.780434   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:01.780444   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:01.780454   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:01.780469   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:01.780477   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:01.780485   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:01 GMT
	I0821 10:53:01.780498   97516 round_trippers.go:580]     Audit-Id: 279b28ff-d125-4a3b-8ec8-db8b2b6dc81c
	I0821 10:53:01.780606   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:01.781394   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:01.781405   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:01.781415   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:01.781425   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:01.783484   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:01.783498   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:01.783505   97516 round_trippers.go:580]     Audit-Id: 43fea227-51a4-479b-89b4-8171367d64de
	I0821 10:53:01.783511   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:01.783516   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:01.783522   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:01.783527   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:01.783532   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:01 GMT
	I0821 10:53:01.783672   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:01.851338   97516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0821 10:53:01.856692   97516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0821 10:53:02.284207   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:02.284226   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:02.284234   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:02.284240   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:02.338272   97516 round_trippers.go:574] Response Status: 200 OK in 54 milliseconds
	I0821 10:53:02.338350   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:02.338372   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:02.338390   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:02.338429   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:02.338447   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:02.338478   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:02 GMT
	I0821 10:53:02.338510   97516 round_trippers.go:580]     Audit-Id: 13e41b1a-e847-4023-b0c7-2e9e0125fb65
	I0821 10:53:02.339106   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:02.457476   97516 command_runner.go:130] > configmap/coredns replaced
	I0821 10:53:02.463547   97516 start.go:901] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
	I0821 10:53:02.687463   97516 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0821 10:53:02.687496   97516 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0821 10:53:02.687508   97516 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0821 10:53:02.687519   97516 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0821 10:53:02.687526   97516 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0821 10:53:02.687533   97516 command_runner.go:130] > pod/storage-provisioner created
	I0821 10:53:02.687588   97516 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0821 10:53:02.690039   97516 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0821 10:53:02.691426   97516 addons.go:502] enable addons completed in 1.114876877s: enabled=[default-storageclass storage-provisioner]
	I0821 10:53:02.784721   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:02.784742   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:02.784749   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:02.784772   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:02.787161   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:02.787177   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:02.787184   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:02.787189   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:02.787197   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:02.787207   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:02.787216   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:02 GMT
	I0821 10:53:02.787232   97516 round_trippers.go:580]     Audit-Id: fd017886-a437-4deb-9b95-3d7fa97e195f
	I0821 10:53:02.787482   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:03.284582   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:03.284606   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:03.284617   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:03.284625   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:03.286954   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:03.286980   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:03.286990   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:03.286998   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:03.287004   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:03.287009   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:03.287014   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:03 GMT
	I0821 10:53:03.287024   97516 round_trippers.go:580]     Audit-Id: 864f43d8-ffd9-432e-866e-9ca59a45168a
	I0821 10:53:03.287128   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:03.784887   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:03.784911   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:03.784922   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:03.784930   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:03.787726   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:03.787743   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:03.787751   97516 round_trippers.go:580]     Audit-Id: 2d6ed5b6-9229-4718-a32f-186658c16348
	I0821 10:53:03.787756   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:03.787762   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:03.787770   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:03.787778   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:03.787796   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:03 GMT
	I0821 10:53:03.787940   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:03.788270   97516 node_ready.go:58] node "multinode-200985" has status "Ready":"False"
	I0821 10:53:04.284489   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:04.284509   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:04.284517   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:04.284524   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:04.286995   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:04.287018   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:04.287028   97516 round_trippers.go:580]     Audit-Id: 87a75c81-d09d-412a-b068-01e746f1a2bd
	I0821 10:53:04.287036   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:04.287045   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:04.287056   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:04.287069   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:04.287079   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:04 GMT
	I0821 10:53:04.287212   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:04.784590   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:04.784609   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:04.784616   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:04.784625   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:04.787107   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:04.787128   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:04.787138   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:04.787146   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:04.787156   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:04.787169   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:04 GMT
	I0821 10:53:04.787182   97516 round_trippers.go:580]     Audit-Id: a8dcff48-5a28-44be-8ada-cd5380dddccf
	I0821 10:53:04.787193   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:04.787343   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:05.284746   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:05.284765   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:05.284773   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:05.284779   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:05.287134   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:05.287157   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:05.287166   97516 round_trippers.go:580]     Audit-Id: ada18fe9-d657-4ad6-b1fa-a2bfbc3a4317
	I0821 10:53:05.287174   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:05.287182   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:05.287198   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:05.287206   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:05.287213   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:05 GMT
	I0821 10:53:05.287345   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:05.784584   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:05.784604   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:05.784612   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:05.784618   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:05.786751   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:05.786770   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:05.786776   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:05.786785   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:05.786794   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:05.786803   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:05 GMT
	I0821 10:53:05.786812   97516 round_trippers.go:580]     Audit-Id: 9321e941-f325-42eb-8c96-6b857723846e
	I0821 10:53:05.786827   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:05.786995   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:06.284493   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:06.284514   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:06.284527   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:06.284534   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:06.286964   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:06.286988   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:06.286998   97516 round_trippers.go:580]     Audit-Id: 055fa6dd-9021-4ee7-8b00-ade577d638e8
	I0821 10:53:06.287008   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:06.287017   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:06.287025   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:06.287037   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:06.287049   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:06 GMT
	I0821 10:53:06.287164   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:06.287614   97516 node_ready.go:58] node "multinode-200985" has status "Ready":"False"
	I0821 10:53:06.784578   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:06.784598   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:06.784605   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:06.784612   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:06.786948   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:06.786971   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:06.786982   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:06.786991   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:06.787000   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:06.787010   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:06.787018   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:06 GMT
	I0821 10:53:06.787024   97516 round_trippers.go:580]     Audit-Id: 1600c716-6194-4d7a-8cb5-67dea46b84b6
	I0821 10:53:06.787160   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:07.284588   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:07.284609   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:07.284618   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:07.284624   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:07.287063   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:07.287085   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:07.287094   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:07.287102   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:07.287109   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:07.287120   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:07.287133   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:07 GMT
	I0821 10:53:07.287142   97516 round_trippers.go:580]     Audit-Id: 12ebd163-6ed7-4c52-86f8-6dfa1e86f0f5
	I0821 10:53:07.287251   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:07.784594   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:07.784620   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:07.784632   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:07.784640   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:07.786942   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:07.786966   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:07.786973   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:07.786983   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:07.786992   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:07.787002   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:07 GMT
	I0821 10:53:07.787011   97516 round_trippers.go:580]     Audit-Id: 060dfa6b-5f10-4919-84ec-8a885a8f0e64
	I0821 10:53:07.787024   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:07.787138   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:08.284567   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:08.284585   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:08.284593   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:08.284599   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:08.286807   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:08.286829   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:08.286837   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:08.286842   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:08.286848   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:08.286853   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:08.286859   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:08 GMT
	I0821 10:53:08.286864   97516 round_trippers.go:580]     Audit-Id: ee001520-edd1-4f85-bc36-a452f8403333
	I0821 10:53:08.287032   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:08.785016   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:08.785037   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:08.785045   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:08.785051   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:08.787164   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:08.787181   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:08.787188   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:08.787193   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:08.787199   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:08.787204   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:08 GMT
	I0821 10:53:08.787209   97516 round_trippers.go:580]     Audit-Id: 3dc05d22-e18b-431b-94f8-83a6cb9a88fb
	I0821 10:53:08.787218   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:08.787400   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:08.787747   97516 node_ready.go:58] node "multinode-200985" has status "Ready":"False"
	I0821 10:53:09.285105   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:09.285125   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:09.285133   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:09.285139   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:09.287285   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:09.287301   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:09.287308   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:09.287313   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:09.287319   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:09.287328   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:09.287334   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:09 GMT
	I0821 10:53:09.287340   97516 round_trippers.go:580]     Audit-Id: 91707dcc-d78a-4e75-bd00-bfbd7b932f49
	I0821 10:53:09.287496   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:09.785177   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:09.785209   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:09.785221   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:09.785231   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:09.787426   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:09.787454   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:09.787465   97516 round_trippers.go:580]     Audit-Id: 1805d100-f47b-4178-9204-4b5fa02fb65a
	I0821 10:53:09.787474   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:09.787483   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:09.787490   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:09.787496   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:09.787502   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:09 GMT
	I0821 10:53:09.787605   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:10.284155   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:10.284180   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:10.284193   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:10.284203   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:10.286517   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:10.286541   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:10.286551   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:10.286559   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:10.286568   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:10.286577   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:10 GMT
	I0821 10:53:10.286587   97516 round_trippers.go:580]     Audit-Id: 4d89d444-bc3c-4560-b7f8-119683396765
	I0821 10:53:10.286598   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:10.286712   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:10.784330   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:10.784356   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:10.784369   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:10.784379   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:10.786513   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:10.786530   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:10.786537   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:10.786543   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:10.786549   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:10 GMT
	I0821 10:53:10.786556   97516 round_trippers.go:580]     Audit-Id: ca43178c-c552-4265-8c77-56b01222ca11
	I0821 10:53:10.786561   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:10.786566   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:10.786674   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:11.284215   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:11.284236   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:11.284243   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:11.284249   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:11.286462   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:11.286482   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:11.286492   97516 round_trippers.go:580]     Audit-Id: 7ac91fa7-7f10-40c8-a49f-2bff8e87dbe1
	I0821 10:53:11.286499   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:11.286507   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:11.286515   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:11.286524   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:11.286534   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:11 GMT
	I0821 10:53:11.286694   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:11.287010   97516 node_ready.go:58] node "multinode-200985" has status "Ready":"False"
	I0821 10:53:11.784489   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:11.784511   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:11.784519   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:11.784525   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:11.786814   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:11.786831   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:11.786839   97516 round_trippers.go:580]     Audit-Id: 9a61b457-28b7-4b91-aac1-75c2e86c70ea
	I0821 10:53:11.786844   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:11.786850   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:11.786855   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:11.786861   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:11.786870   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:11 GMT
	I0821 10:53:11.787035   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:12.284570   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:12.284592   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:12.284621   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:12.284631   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:12.286910   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:12.286930   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:12.286938   97516 round_trippers.go:580]     Audit-Id: 607f2092-29e8-4081-95c9-7f2a8ecc54bb
	I0821 10:53:12.286943   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:12.286949   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:12.286954   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:12.286960   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:12.286965   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:12 GMT
	I0821 10:53:12.287085   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:12.784627   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:12.784650   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:12.784658   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:12.784664   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:12.787045   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:12.787071   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:12.787086   97516 round_trippers.go:580]     Audit-Id: 6c7e501f-2cee-4faf-80df-98934150a0fb
	I0821 10:53:12.787096   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:12.787106   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:12.787117   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:12.787130   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:12.787141   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:12 GMT
	I0821 10:53:12.787334   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:13.284909   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:13.284929   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:13.284941   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:13.284950   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:13.287278   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:13.287297   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:13.287303   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:13.287309   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:13.287314   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:13.287320   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:13 GMT
	I0821 10:53:13.287328   97516 round_trippers.go:580]     Audit-Id: fa078d8a-3eb5-403e-958d-3bc69d4e02a0
	I0821 10:53:13.287333   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:13.287469   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:13.287767   97516 node_ready.go:58] node "multinode-200985" has status "Ready":"False"
	I0821 10:53:13.784240   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:13.784259   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:13.784267   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:13.784274   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:13.786583   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:13.786608   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:13.786616   97516 round_trippers.go:580]     Audit-Id: 6ef12150-bc7a-421d-acd9-4466e84ba6e9
	I0821 10:53:13.786624   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:13.786633   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:13.786643   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:13.786650   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:13.786658   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:13 GMT
	I0821 10:53:13.786788   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:14.284298   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:14.284320   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:14.284328   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:14.284334   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:14.286852   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:14.286875   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:14.286886   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:14 GMT
	I0821 10:53:14.286896   97516 round_trippers.go:580]     Audit-Id: 6510e21d-ce0f-4274-92a0-90665685fabb
	I0821 10:53:14.286904   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:14.286915   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:14.286927   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:14.286939   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:14.287062   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:14.784587   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:14.784607   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:14.784615   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:14.784622   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:14.786968   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:14.786992   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:14.787002   97516 round_trippers.go:580]     Audit-Id: 8d3baa2f-a236-4321-aa03-2980273a144e
	I0821 10:53:14.787011   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:14.787023   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:14.787035   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:14.787050   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:14.787062   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:14 GMT
	I0821 10:53:14.787202   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:15.284573   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:15.284603   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:15.284611   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:15.284617   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:15.286824   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:15.286842   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:15.286852   97516 round_trippers.go:580]     Audit-Id: c7e7eeab-c080-4f65-b3f0-07132a904b68
	I0821 10:53:15.286862   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:15.286874   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:15.286886   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:15.286894   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:15.286903   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:15 GMT
	I0821 10:53:15.287024   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:15.784223   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:15.784247   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:15.784257   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:15.784269   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:15.786439   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:15.786465   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:15.786476   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:15 GMT
	I0821 10:53:15.786485   97516 round_trippers.go:580]     Audit-Id: 29eb6390-1156-4b35-83af-f73fdc4e3d65
	I0821 10:53:15.786494   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:15.786505   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:15.786517   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:15.786526   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:15.786652   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:15.786978   97516 node_ready.go:58] node "multinode-200985" has status "Ready":"False"
	I0821 10:53:16.284174   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:16.284196   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:16.284207   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:16.284217   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:16.286563   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:16.286581   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:16.286587   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:16.286593   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:16 GMT
	I0821 10:53:16.286599   97516 round_trippers.go:580]     Audit-Id: ba70f2c3-550b-46cc-bb81-8d3a4da5c9d2
	I0821 10:53:16.286604   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:16.286610   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:16.286616   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:16.286719   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:16.784275   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:16.784297   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:16.784308   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:16.784316   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:16.786713   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:16.786735   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:16.786746   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:16 GMT
	I0821 10:53:16.786753   97516 round_trippers.go:580]     Audit-Id: 0602d761-d58d-464d-87ca-991f57c2f0bd
	I0821 10:53:16.786761   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:16.786770   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:16.786779   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:16.786791   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:16.786920   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:17.284384   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:17.284411   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:17.284424   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:17.284435   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:17.286770   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:17.286791   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:17.286801   97516 round_trippers.go:580]     Audit-Id: 3c4fdbfe-9425-4caf-8df4-9f3747b23883
	I0821 10:53:17.286808   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:17.286815   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:17.286824   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:17.286831   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:17.286840   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:17 GMT
	I0821 10:53:17.286935   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:17.784534   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:17.784561   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:17.784573   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:17.784589   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:17.787156   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:17.787183   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:17.787190   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:17.787195   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:17.787201   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:17.787207   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:17.787213   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:17 GMT
	I0821 10:53:17.787225   97516 round_trippers.go:580]     Audit-Id: d4035212-baa9-4f15-af52-73e3eb7e2f29
	I0821 10:53:17.787392   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:17.787792   97516 node_ready.go:58] node "multinode-200985" has status "Ready":"False"
	I0821 10:53:18.284886   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:18.284904   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:18.284912   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:18.284919   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:18.287328   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:18.287349   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:18.287379   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:18.287387   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:18 GMT
	I0821 10:53:18.287399   97516 round_trippers.go:580]     Audit-Id: 5e4e8d45-7b17-4bc6-8e52-66cafd8bb26a
	I0821 10:53:18.287408   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:18.287419   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:18.287424   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:18.287536   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:18.784355   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:18.784377   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:18.784385   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:18.784392   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:18.786749   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:18.786771   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:18.786778   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:18 GMT
	I0821 10:53:18.786784   97516 round_trippers.go:580]     Audit-Id: f1304d3c-bfb5-496b-a480-04e8d33c8248
	I0821 10:53:18.786792   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:18.786801   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:18.786813   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:18.786826   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:18.786992   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:19.284355   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:19.284372   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:19.284380   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:19.284386   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:19.286589   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:19.286614   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:19.286624   97516 round_trippers.go:580]     Audit-Id: dbd07b81-2fda-4a69-9992-bc0ff66e6ca2
	I0821 10:53:19.286634   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:19.286643   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:19.286651   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:19.286658   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:19.286668   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:19 GMT
	I0821 10:53:19.286779   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:19.784263   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:19.784282   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:19.784290   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:19.784296   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:19.786621   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:19.786641   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:19.786648   97516 round_trippers.go:580]     Audit-Id: f9f9953c-0fe0-4c1f-b537-d192ed1a5cdb
	I0821 10:53:19.786654   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:19.786659   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:19.786665   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:19.786671   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:19.786676   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:19 GMT
	I0821 10:53:19.786781   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:20.284266   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:20.284287   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:20.284295   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:20.284301   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:20.286793   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:20.286812   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:20.286820   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:20.286826   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:20.286833   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:20.286842   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:20 GMT
	I0821 10:53:20.286852   97516 round_trippers.go:580]     Audit-Id: b6d72084-9359-4439-83c6-c53fa83ec0c7
	I0821 10:53:20.286861   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:20.286983   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:20.287293   97516 node_ready.go:58] node "multinode-200985" has status "Ready":"False"
	I0821 10:53:20.784206   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:20.784228   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:20.784238   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:20.784244   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:20.786475   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:20.786498   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:20.786507   97516 round_trippers.go:580]     Audit-Id: f2adff33-dc55-429d-9f0f-b22e21a2ff3f
	I0821 10:53:20.786514   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:20.786522   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:20.786531   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:20.786542   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:20.786559   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:20 GMT
	I0821 10:53:20.786675   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:21.284215   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:21.284234   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:21.284242   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:21.284248   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:21.286420   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:21.286446   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:21.286456   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:21.286466   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:21 GMT
	I0821 10:53:21.286475   97516 round_trippers.go:580]     Audit-Id: 40f5de8c-8305-4cda-8179-80e1466d9fb0
	I0821 10:53:21.286484   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:21.286496   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:21.286509   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:21.286613   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:21.784316   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:21.784335   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:21.784343   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:21.784349   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:21.786778   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:21.786804   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:21.786815   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:21.786823   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:21.786828   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:21.786835   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:21.786840   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:21 GMT
	I0821 10:53:21.786846   97516 round_trippers.go:580]     Audit-Id: 72cd4d84-83bc-4f89-8ab6-1463e1228857
	I0821 10:53:21.786973   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:22.284206   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:22.284237   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:22.284245   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:22.284252   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:22.286459   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:22.286476   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:22.286483   97516 round_trippers.go:580]     Audit-Id: c2bb2988-db7b-420e-9d81-1fb0e41153e3
	I0821 10:53:22.286490   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:22.286505   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:22.286514   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:22.286523   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:22.286533   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:22 GMT
	I0821 10:53:22.286619   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:22.784137   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:22.784175   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:22.784185   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:22.784193   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:22.786488   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:22.786509   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:22.786518   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:22.786526   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:22.786535   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:22 GMT
	I0821 10:53:22.786545   97516 round_trippers.go:580]     Audit-Id: dd0e1048-65ab-4759-a6cf-a4e5c0d2648d
	I0821 10:53:22.786555   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:22.786571   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:22.786703   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:22.787035   97516 node_ready.go:58] node "multinode-200985" has status "Ready":"False"
	I0821 10:53:23.284221   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:23.284240   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:23.284248   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:23.284254   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:23.286739   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:23.286761   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:23.286768   97516 round_trippers.go:580]     Audit-Id: 1497605d-d693-4bdc-a17a-6f714c590642
	I0821 10:53:23.286773   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:23.286778   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:23.286783   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:23.286788   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:23.286794   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:23 GMT
	I0821 10:53:23.286925   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:23.784564   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:23.784585   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:23.784593   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:23.784600   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:23.786925   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:23.786943   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:23.786950   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:23.786956   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:23 GMT
	I0821 10:53:23.786961   97516 round_trippers.go:580]     Audit-Id: c9fbb956-e0ea-4ec4-9149-3e0ba229040f
	I0821 10:53:23.786975   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:23.786986   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:23.786998   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:23.787116   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:24.284576   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:24.284596   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:24.284604   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:24.284611   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:24.287203   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:24.287230   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:24.287241   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:24.287251   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:24.287260   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:24.287272   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:24 GMT
	I0821 10:53:24.287285   97516 round_trippers.go:580]     Audit-Id: 66da2e7e-7464-475e-9093-fe431edc0684
	I0821 10:53:24.287294   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:24.287427   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:24.784576   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:24.784596   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:24.784604   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:24.784610   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:24.786898   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:24.786930   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:24.786937   97516 round_trippers.go:580]     Audit-Id: adea1577-2a8d-422f-8125-c922aab9e31c
	I0821 10:53:24.786943   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:24.786948   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:24.786954   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:24.786960   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:24.786969   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:24 GMT
	I0821 10:53:24.787074   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:24.787396   97516 node_ready.go:58] node "multinode-200985" has status "Ready":"False"
	I0821 10:53:25.284547   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:25.284568   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:25.284576   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:25.284582   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:25.286923   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:25.286940   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:25.286947   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:25.286953   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:25.286960   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:25 GMT
	I0821 10:53:25.286969   97516 round_trippers.go:580]     Audit-Id: 6d6fbefd-b388-4e9d-b353-481652914195
	I0821 10:53:25.286979   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:25.286996   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:25.287133   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:25.784566   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:25.784585   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:25.784592   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:25.784598   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:25.787039   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:25.787056   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:25.787062   97516 round_trippers.go:580]     Audit-Id: ae482ba9-5853-4a21-aa50-f4543861eacf
	I0821 10:53:25.787068   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:25.787073   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:25.787078   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:25.787084   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:25.787090   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:25 GMT
	I0821 10:53:25.787251   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:26.284568   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:26.284591   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:26.284602   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:26.284610   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:26.287133   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:26.287154   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:26.287164   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:26.287173   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:26.287183   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:26.287196   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:26.287205   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:26 GMT
	I0821 10:53:26.287217   97516 round_trippers.go:580]     Audit-Id: 42f0e2d0-04a3-445e-9e11-93275ad8f7ad
	I0821 10:53:26.287326   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:26.784998   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:26.785023   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:26.785035   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:26.785045   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:26.787431   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:26.787454   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:26.787466   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:26 GMT
	I0821 10:53:26.787475   97516 round_trippers.go:580]     Audit-Id: bfdc7c3e-e457-4929-830a-03d0bee9a4a5
	I0821 10:53:26.787482   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:26.787491   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:26.787505   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:26.787514   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:26.787642   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:26.788074   97516 node_ready.go:58] node "multinode-200985" has status "Ready":"False"
	I0821 10:53:27.284168   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:27.284201   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:27.284209   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:27.284215   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:27.286488   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:27.286506   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:27.286513   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:27 GMT
	I0821 10:53:27.286518   97516 round_trippers.go:580]     Audit-Id: 18b2bfba-7ee8-4c21-a88d-11babe672eaa
	I0821 10:53:27.286527   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:27.286536   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:27.286545   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:27.286554   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:27.286688   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:27.784270   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:27.784293   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:27.784307   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:27.784315   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:27.786836   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:27.786860   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:27.786871   97516 round_trippers.go:580]     Audit-Id: 876df5ad-a38c-483c-ab21-2bdb1d7960ce
	I0821 10:53:27.786880   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:27.786889   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:27.786898   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:27.786907   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:27.786916   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:27 GMT
	I0821 10:53:27.787049   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:28.284590   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:28.284608   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:28.284616   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:28.284622   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:28.286915   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:28.286936   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:28.286946   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:28.286954   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:28 GMT
	I0821 10:53:28.286963   97516 round_trippers.go:580]     Audit-Id: c58481f2-2882-4951-9d67-9b54d7984171
	I0821 10:53:28.286971   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:28.286986   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:28.286994   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:28.287086   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:28.784119   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:28.784138   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:28.784146   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:28.784152   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:28.786532   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:28.786556   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:28.786566   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:28.786575   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:28.786584   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:28.786594   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:28 GMT
	I0821 10:53:28.786608   97516 round_trippers.go:580]     Audit-Id: c17765c5-b25f-48b6-8106-6eaf67c6c024
	I0821 10:53:28.786617   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:28.786717   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:29.284278   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:29.284301   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:29.284309   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:29.284315   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:29.286632   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:29.286654   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:29.286662   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:29.286668   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:29.286674   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:29.286679   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:29 GMT
	I0821 10:53:29.286686   97516 round_trippers.go:580]     Audit-Id: 8733cac6-0ed4-4416-931f-38194295c70d
	I0821 10:53:29.286692   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:29.286774   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:29.287095   97516 node_ready.go:58] node "multinode-200985" has status "Ready":"False"
	I0821 10:53:29.784312   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:29.784332   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:29.784340   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:29.784348   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:29.786575   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:29.786595   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:29.786605   97516 round_trippers.go:580]     Audit-Id: d5b2e687-7caa-47ea-8e8d-7a6adb2d42e8
	I0821 10:53:29.786614   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:29.786623   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:29.786636   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:29.786645   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:29.786661   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:29 GMT
	I0821 10:53:29.786790   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:30.284289   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:30.284309   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:30.284317   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:30.284323   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:30.286921   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:30.286941   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:30.286948   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:30.286954   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:30.286964   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:30.286970   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:30.286980   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:30 GMT
	I0821 10:53:30.286985   97516 round_trippers.go:580]     Audit-Id: c2f2ed6e-6209-4678-98f3-1876712da15c
	I0821 10:53:30.287075   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:30.784768   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:30.784787   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:30.784795   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:30.784802   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:30.787132   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:30.787151   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:30.787158   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:30.787165   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:30.787170   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:30 GMT
	I0821 10:53:30.787176   97516 round_trippers.go:580]     Audit-Id: dd1dc6ad-08e0-4815-9c30-3ecb58f9cc55
	I0821 10:53:30.787181   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:30.787187   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:30.787321   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:31.284571   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:31.284589   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:31.284597   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:31.284604   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:31.287118   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:31.287139   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:31.287148   97516 round_trippers.go:580]     Audit-Id: cc7675c2-fcb5-4918-868e-19c7933bd9af
	I0821 10:53:31.287155   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:31.287164   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:31.287172   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:31.287180   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:31.287192   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:31 GMT
	I0821 10:53:31.287349   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:31.287688   97516 node_ready.go:58] node "multinode-200985" has status "Ready":"False"
	I0821 10:53:31.785115   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:31.785135   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:31.785143   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:31.785149   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:31.787463   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:31.787480   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:31.787487   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:31.787492   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:31.787501   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:31.787510   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:31 GMT
	I0821 10:53:31.787518   97516 round_trippers.go:580]     Audit-Id: 9f417acc-bd3f-48b7-875d-ad2acd533342
	I0821 10:53:31.787526   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:31.787635   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"298","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0821 10:53:32.284200   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:32.284222   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:32.284230   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:32.284236   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:32.286396   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:32.286415   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:32.286421   97516 round_trippers.go:580]     Audit-Id: cf9e3bfb-4ccc-41f0-b5b7-1f9288d797c7
	I0821 10:53:32.286430   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:32.286439   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:32.286448   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:32.286457   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:32.286467   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:32 GMT
	I0821 10:53:32.286554   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"388","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0821 10:53:32.286843   97516 node_ready.go:49] node "multinode-200985" has status "Ready":"True"
	I0821 10:53:32.286856   97516 node_ready.go:38] duration metric: took 30.508603751s waiting for node "multinode-200985" to be "Ready" ...
	I0821 10:53:32.286865   97516 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
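(Note: the log above shows the standard client-go readiness-polling pattern: a GET against /api/v1/nodes/multinode-200985 roughly every 500ms until the node object, at resourceVersion 388, finally reports "Ready":"True". A minimal illustrative sketch of such a wait loop, assuming a reachable cluster and a kubeconfig at the default path; this is not minikube's actual node_ready.go implementation:

// Illustrative sketch only -- not minikube's code. Poll a node's Ready
// condition via client-go until it is True or the timeout expires.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady blocks until the named node reports Ready=True,
// polling every 500ms (the cadence visible in the log above).
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err // surface API errors instead of retrying silently
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil // Ready condition not published yet; keep polling
		})
}

func main() {
	// Assumes a kubeconfig at the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(context.Background(), cs, "multinode-200985", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}

The log that follows shows the analogous per-pod check under the 6m0s budget: pod_ready.go fetches each system-critical pod, here coredns-5d78c9869d-p7wfm, and inspects its conditions the same way.)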
	I0821 10:53:32.286941   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0821 10:53:32.286948   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:32.286955   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:32.286960   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:32.290282   97516 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0821 10:53:32.290306   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:32.290317   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:32 GMT
	I0821 10:53:32.290326   97516 round_trippers.go:580]     Audit-Id: f254a55f-6263-484d-91e0-2c8e66a93e4d
	I0821 10:53:32.290336   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:32.290349   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:32.290362   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:32.290374   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:32.290796   97516 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"394"},"items":[{"metadata":{"name":"coredns-5d78c9869d-p7wfm","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"e31fc5d6-efb4-4659-95e0-45e4b0319116","resourceVersion":"394","creationTimestamp":"2023-08-21T10:53:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"e0b7f7a8-559b-44c2-879c-54813abddce8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e0b7f7a8-559b-44c2-879c-54813abddce8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55535 chars]
	I0821 10:53:32.293755   97516 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-p7wfm" in "kube-system" namespace to be "Ready" ...
	I0821 10:53:32.293821   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-p7wfm
	I0821 10:53:32.293830   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:32.293837   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:32.293843   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:32.295908   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:32.295927   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:32.295938   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:32.295948   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:32 GMT
	I0821 10:53:32.295957   97516 round_trippers.go:580]     Audit-Id: 0d535fed-6da9-4881-8a06-0e6a6410feda
	I0821 10:53:32.295963   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:32.295970   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:32.295976   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:32.296071   97516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-p7wfm","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"e31fc5d6-efb4-4659-95e0-45e4b0319116","resourceVersion":"394","creationTimestamp":"2023-08-21T10:53:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"e0b7f7a8-559b-44c2-879c-54813abddce8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e0b7f7a8-559b-44c2-879c-54813abddce8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0821 10:53:32.296558   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:32.296572   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:32.296583   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:32.296591   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:32.298238   97516 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0821 10:53:32.298251   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:32.298258   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:32.298264   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:32.298270   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:32.298278   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:32 GMT
	I0821 10:53:32.298287   97516 round_trippers.go:580]     Audit-Id: 03c3a289-e297-4329-9452-cff1e212595f
	I0821 10:53:32.298305   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:32.298429   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"388","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0821 10:53:32.298723   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-p7wfm
	I0821 10:53:32.298734   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:32.298741   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:32.298747   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:32.300417   97516 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0821 10:53:32.300431   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:32.300437   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:32 GMT
	I0821 10:53:32.300442   97516 round_trippers.go:580]     Audit-Id: 781c44d9-42be-4ead-bcca-9630f615f085
	I0821 10:53:32.300450   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:32.300458   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:32.300467   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:32.300487   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:32.300636   97516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-p7wfm","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"e31fc5d6-efb4-4659-95e0-45e4b0319116","resourceVersion":"394","creationTimestamp":"2023-08-21T10:53:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"e0b7f7a8-559b-44c2-879c-54813abddce8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e0b7f7a8-559b-44c2-879c-54813abddce8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0821 10:53:32.301032   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:32.301046   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:32.301056   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:32.301065   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:32.302682   97516 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0821 10:53:32.302696   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:32.302702   97516 round_trippers.go:580]     Audit-Id: d9048789-566c-4e1a-a9f1-3ddabbdfc493
	I0821 10:53:32.302709   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:32.302715   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:32.302720   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:32.302725   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:32.302732   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:32 GMT
	I0821 10:53:32.302895   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"388","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0821 10:53:32.803292   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-p7wfm
	I0821 10:53:32.803313   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:32.803321   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:32.803327   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:32.805748   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:32.805772   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:32.805782   97516 round_trippers.go:580]     Audit-Id: 8d262ead-856f-4e91-8755-a567f9a614e9
	I0821 10:53:32.805792   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:32.805800   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:32.805807   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:32.805815   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:32.805828   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:32 GMT
	I0821 10:53:32.805928   97516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-p7wfm","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"e31fc5d6-efb4-4659-95e0-45e4b0319116","resourceVersion":"394","creationTimestamp":"2023-08-21T10:53:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"e0b7f7a8-559b-44c2-879c-54813abddce8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e0b7f7a8-559b-44c2-879c-54813abddce8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0821 10:53:32.806334   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:32.806345   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:32.806352   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:32.806357   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:32.808269   97516 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0821 10:53:32.808285   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:32.808292   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:32.808297   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:32.808303   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:32 GMT
	I0821 10:53:32.808308   97516 round_trippers.go:580]     Audit-Id: e967c2f7-79d2-42a3-9902-5dc4d000f330
	I0821 10:53:32.808314   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:32.808322   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:32.808455   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"388","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0821 10:53:33.304091   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-p7wfm
	I0821 10:53:33.304111   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:33.304119   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:33.304126   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:33.306388   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:33.306412   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:33.306422   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:33.306431   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:33 GMT
	I0821 10:53:33.306439   97516 round_trippers.go:580]     Audit-Id: 9ca75ead-a9a9-4e09-978d-672d49d638aa
	I0821 10:53:33.306451   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:33.306465   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:33.306473   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:33.306563   97516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-p7wfm","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"e31fc5d6-efb4-4659-95e0-45e4b0319116","resourceVersion":"394","creationTimestamp":"2023-08-21T10:53:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"e0b7f7a8-559b-44c2-879c-54813abddce8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e0b7f7a8-559b-44c2-879c-54813abddce8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0821 10:53:33.307008   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:33.307020   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:33.307030   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:33.307038   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:33.309026   97516 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0821 10:53:33.309048   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:33.309058   97516 round_trippers.go:580]     Audit-Id: d6ff211b-5712-4d15-b715-73212c123320
	I0821 10:53:33.309068   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:33.309077   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:33.309090   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:33.309100   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:33.309110   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:33 GMT
	I0821 10:53:33.309214   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"388","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0821 10:53:33.803924   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-p7wfm
	I0821 10:53:33.803946   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:33.803954   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:33.803960   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:33.806295   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:33.806313   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:33.806323   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:33.806329   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:33 GMT
	I0821 10:53:33.806335   97516 round_trippers.go:580]     Audit-Id: 2f77d30a-a476-4db0-b34e-206a35f2b057
	I0821 10:53:33.806341   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:33.806349   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:33.806358   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:33.806478   97516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-p7wfm","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"e31fc5d6-efb4-4659-95e0-45e4b0319116","resourceVersion":"407","creationTimestamp":"2023-08-21T10:53:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"e0b7f7a8-559b-44c2-879c-54813abddce8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e0b7f7a8-559b-44c2-879c-54813abddce8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0821 10:53:33.806910   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:33.806921   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:33.806928   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:33.806933   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:33.808817   97516 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0821 10:53:33.808844   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:33.808853   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:33.808858   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:33.808863   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:33.808869   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:33.808875   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:33 GMT
	I0821 10:53:33.808880   97516 round_trippers.go:580]     Audit-Id: 96d2eb09-59c3-4d1c-b7d6-1cf592cca1c4
	I0821 10:53:33.808985   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"388","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0821 10:53:33.809248   97516 pod_ready.go:92] pod "coredns-5d78c9869d-p7wfm" in "kube-system" namespace has status "Ready":"True"
	I0821 10:53:33.809260   97516 pod_ready.go:81] duration metric: took 1.515485666s waiting for pod "coredns-5d78c9869d-p7wfm" in "kube-system" namespace to be "Ready" ...
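
[Editor's note] The repeated GET pairs above are minikube's readiness poll: fetch the Pod and its Node roughly every 500ms until the Pod reports a Ready condition. A minimal sketch of that loop with client-go follows; it is not minikube's actual pod_ready.go, and the kubeconfig path is a placeholder (the pod name is the one from this run):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder kubeconfig path; minikube wires up its client differently.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		for {
			pod, err := client.CoreV1().Pods("kube-system").Get(
				context.TODO(), "coredns-5d78c9869d-p7wfm", metav1.GetOptions{})
			if err == nil {
				for _, cond := range pod.Status.Conditions {
					// Stop once the PodReady condition is True, as logged above.
					if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
						fmt.Println("pod is Ready")
						return
					}
				}
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms gap between GETs in the log
		}
	}
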
	I0821 10:53:33.809268   97516 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-200985" in "kube-system" namespace to be "Ready" ...
	I0821 10:53:33.809308   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-200985
	I0821 10:53:33.809315   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:33.809322   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:33.809330   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:33.811064   97516 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0821 10:53:33.811079   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:33.811085   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:33.811090   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:33.811096   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:33.811103   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:33 GMT
	I0821 10:53:33.811112   97516 round_trippers.go:580]     Audit-Id: 89e9fc7f-04e8-4656-8056-7a34af614491
	I0821 10:53:33.811129   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:33.811215   97516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-200985","namespace":"kube-system","uid":"a157a5a3-4690-4eb4-9efd-f753499e5e11","resourceVersion":"260","creationTimestamp":"2023-08-21T10:52:47Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"9757e19a475fd7a8f263a89eaa2774b0","kubernetes.io/config.mirror":"9757e19a475fd7a8f263a89eaa2774b0","kubernetes.io/config.seen":"2023-08-21T10:52:47.336802644Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0821 10:53:33.811591   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:33.811604   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:33.811611   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:33.811617   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:33.813209   97516 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0821 10:53:33.813222   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:33.813229   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:33.813234   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:33.813240   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:33 GMT
	I0821 10:53:33.813245   97516 round_trippers.go:580]     Audit-Id: dcf7e101-fb40-4b45-8a25-6684d559fe38
	I0821 10:53:33.813251   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:33.813256   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:33.813360   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"388","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0821 10:53:33.813630   97516 pod_ready.go:92] pod "etcd-multinode-200985" in "kube-system" namespace has status "Ready":"True"
	I0821 10:53:33.813642   97516 pod_ready.go:81] duration metric: took 4.369881ms waiting for pod "etcd-multinode-200985" in "kube-system" namespace to be "Ready" ...
	I0821 10:53:33.813652   97516 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-200985" in "kube-system" namespace to be "Ready" ...
	I0821 10:53:33.813687   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-200985
	I0821 10:53:33.813694   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:33.813700   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:33.813706   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:33.815294   97516 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0821 10:53:33.815315   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:33.815325   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:33 GMT
	I0821 10:53:33.815334   97516 round_trippers.go:580]     Audit-Id: 4ed232f6-a840-4c8a-aa33-6ccdf15e6249
	I0821 10:53:33.815348   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:33.815372   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:33.815385   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:33.815398   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:33.815494   97516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-200985","namespace":"kube-system","uid":"0a22f07a-55fc-443f-b684-237b16409ed9","resourceVersion":"254","creationTimestamp":"2023-08-21T10:52:47Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"8e3ebedf03aabd7965c175800d660a23","kubernetes.io/config.mirror":"8e3ebedf03aabd7965c175800d660a23","kubernetes.io/config.seen":"2023-08-21T10:52:47.336793815Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0821 10:53:33.815855   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:33.815868   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:33.815878   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:33.815889   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:33.817549   97516 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0821 10:53:33.817566   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:33.817575   97516 round_trippers.go:580]     Audit-Id: 5c4e3264-76c8-4c7a-90f9-d21cdb296fb5
	I0821 10:53:33.817584   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:33.817592   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:33.817601   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:33.817609   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:33.817624   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:33 GMT
	I0821 10:53:33.817776   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"388","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0821 10:53:33.818040   97516 pod_ready.go:92] pod "kube-apiserver-multinode-200985" in "kube-system" namespace has status "Ready":"True"
	I0821 10:53:33.818052   97516 pod_ready.go:81] duration metric: took 4.39526ms waiting for pod "kube-apiserver-multinode-200985" in "kube-system" namespace to be "Ready" ...
	I0821 10:53:33.818061   97516 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-200985" in "kube-system" namespace to be "Ready" ...
	I0821 10:53:33.818103   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-200985
	I0821 10:53:33.818109   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:33.818116   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:33.818124   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:33.819679   97516 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0821 10:53:33.819694   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:33.819701   97516 round_trippers.go:580]     Audit-Id: cbf8b0ca-ab8f-4ea0-b5bd-9c0dc173e9b0
	I0821 10:53:33.819711   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:33.819719   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:33.819731   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:33.819738   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:33.819745   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:33 GMT
	I0821 10:53:33.819856   97516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-200985","namespace":"kube-system","uid":"9f23370f-54a6-415e-b146-ccd32e50df39","resourceVersion":"282","creationTimestamp":"2023-08-21T10:52:47Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"71757014aa83e6e2acd6644df67bac26","kubernetes.io/config.mirror":"71757014aa83e6e2acd6644df67bac26","kubernetes.io/config.seen":"2023-08-21T10:52:47.336799628Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0821 10:53:33.884437   97516 request.go:629] Waited for 64.204468ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:33.884514   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:33.884524   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:33.884536   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:33.884550   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:33.886794   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:33.886815   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:33.886835   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:33.886847   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:33.886860   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:33.886872   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:33 GMT
	I0821 10:53:33.886882   97516 round_trippers.go:580]     Audit-Id: be0511d5-8b21-4433-b2fb-689ad3e77d49
	I0821 10:53:33.886898   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:33.886998   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"388","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0821 10:53:33.887306   97516 pod_ready.go:92] pod "kube-controller-manager-multinode-200985" in "kube-system" namespace has status "Ready":"True"
	I0821 10:53:33.887323   97516 pod_ready.go:81] duration metric: took 69.252119ms waiting for pod "kube-controller-manager-multinode-200985" in "kube-system" namespace to be "Ready" ...
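
[Editor's note] The "Waited for ... due to client-side throttling, not priority and fairness" lines appearing from here on are emitted by client-go itself: its default client-side rate limiter (QPS 5, Burst 10 unless overridden) delays requests that exceed the budget, independent of server-side API Priority and Fairness. A hedged sketch of the usual knob, with illustrative values only:

	package main

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
		if err != nil {
			panic(err)
		}
		// client-go defaults to QPS=5, Burst=10; requests beyond that budget
		// are delayed and logged exactly like the "Waited for ..." lines above.
		cfg.QPS = 50
		cfg.Burst = 100
		_ = kubernetes.NewForConfigOrDie(cfg)
	}
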
	I0821 10:53:33.887336   97516 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hr82h" in "kube-system" namespace to be "Ready" ...
	I0821 10:53:34.084763   97516 request.go:629] Waited for 197.335753ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hr82h
	I0821 10:53:34.084822   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hr82h
	I0821 10:53:34.084827   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:34.084834   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:34.084840   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:34.087147   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:34.087166   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:34.087173   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:34.087179   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:34 GMT
	I0821 10:53:34.087189   97516 round_trippers.go:580]     Audit-Id: f98fad18-546c-4da9-b111-fa2b725d7af9
	I0821 10:53:34.087197   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:34.087210   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:34.087217   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:34.087346   97516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hr82h","generateName":"kube-proxy-","namespace":"kube-system","uid":"3c4817fa-d083-4bc4-9e1b-a98f77433293","resourceVersion":"363","creationTimestamp":"2023-08-21T10:53:01Z","labels":{"controller-revision-hash":"86cc8bcbf7","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d0c1d33f-52b4-4e5d-a101-812adc397df3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d0c1d33f-52b4-4e5d-a101-812adc397df3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0821 10:53:34.285132   97516 request.go:629] Waited for 197.340713ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:34.285201   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:34.285208   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:34.285216   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:34.285227   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:34.287579   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:34.287601   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:34.287610   97516 round_trippers.go:580]     Audit-Id: 8b455596-2367-470d-aab8-2df16c62da2d
	I0821 10:53:34.287617   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:34.287625   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:34.287634   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:34.287643   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:34.287656   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:34 GMT
	I0821 10:53:34.287765   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"388","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0821 10:53:34.288077   97516 pod_ready.go:92] pod "kube-proxy-hr82h" in "kube-system" namespace has status "Ready":"True"
	I0821 10:53:34.288093   97516 pod_ready.go:81] duration metric: took 400.749803ms waiting for pod "kube-proxy-hr82h" in "kube-system" namespace to be "Ready" ...
	I0821 10:53:34.288105   97516 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-200985" in "kube-system" namespace to be "Ready" ...
	I0821 10:53:34.484519   97516 request.go:629] Waited for 196.355419ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-200985
	I0821 10:53:34.484574   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-200985
	I0821 10:53:34.484579   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:34.484589   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:34.484611   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:34.487089   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:34.487110   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:34.487119   97516 round_trippers.go:580]     Audit-Id: d5cb601d-54a3-4780-ac13-8a66dee94301
	I0821 10:53:34.487126   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:34.487133   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:34.487146   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:34.487158   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:34.487170   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:34 GMT
	I0821 10:53:34.487345   97516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-200985","namespace":"kube-system","uid":"1ac1d965-22f6-4c06-b04f-7cbfab581bbd","resourceVersion":"257","creationTimestamp":"2023-08-21T10:52:47Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"775aa5ce7b376581a0fd6b5e7ef37b50","kubernetes.io/config.mirror":"775aa5ce7b376581a0fd6b5e7ef37b50","kubernetes.io/config.seen":"2023-08-21T10:52:47.336801061Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0821 10:53:34.685157   97516 request.go:629] Waited for 197.388913ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:34.685230   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:53:34.685238   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:34.685252   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:34.685266   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:34.687935   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:34.687963   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:34.687972   97516 round_trippers.go:580]     Audit-Id: 639e52ed-0d8f-48a5-8d8a-882b96d9a9e7
	I0821 10:53:34.687981   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:34.687989   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:34.687998   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:34.688012   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:34.688025   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:34 GMT
	I0821 10:53:34.688182   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"388","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0821 10:53:34.688504   97516 pod_ready.go:92] pod "kube-scheduler-multinode-200985" in "kube-system" namespace has status "Ready":"True"
	I0821 10:53:34.688518   97516 pod_ready.go:81] duration metric: took 400.405687ms waiting for pod "kube-scheduler-multinode-200985" in "kube-system" namespace to be "Ready" ...
	I0821 10:53:34.688531   97516 pod_ready.go:38] duration metric: took 2.401633861s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0821 10:53:34.688550   97516 api_server.go:52] waiting for apiserver process to appear ...
	I0821 10:53:34.688610   97516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0821 10:53:34.698807   97516 command_runner.go:130] > 1444
	I0821 10:53:34.698841   97516 api_server.go:72] duration metric: took 33.049061255s to wait for apiserver process to appear ...
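
[Editor's note] The process check above shells out to pgrep; exit status 0 plus a PID on stdout (here "1444") means the apiserver is up. A small local sketch of the same check (minikube runs it over SSH inside the node):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// -f matches against the full command line, -x requires the pattern to
		// match that whole line, -n keeps only the newest matching process.
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err != nil {
			fmt.Println("apiserver process not found")
			return
		}
		fmt.Printf("apiserver pid: %s", out) // e.g. "1444\n"
	}
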
	I0821 10:53:34.698853   97516 api_server.go:88] waiting for apiserver healthz status ...
	I0821 10:53:34.698869   97516 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0821 10:53:34.703000   97516 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
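
[Editor's note] The healthz probe above is a plain HTTPS GET that treats status 200 with body "ok" as healthy. A self-contained sketch follows; real code would trust the cluster CA rather than skip TLS verification, which is done here only to keep the example short:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		c := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
		}}
		resp, err := c.Get("https://192.168.58.2:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d %s\n", resp.StatusCode, body) // healthy apiserver: "200 ok"
	}
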
	I0821 10:53:34.703054   97516 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I0821 10:53:34.703061   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:34.703069   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:34.703076   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:34.703983   97516 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0821 10:53:34.703999   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:34.704006   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:34.704012   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:34.704017   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:34.704029   97516 round_trippers.go:580]     Content-Length: 263
	I0821 10:53:34.704035   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:34 GMT
	I0821 10:53:34.704042   97516 round_trippers.go:580]     Audit-Id: fc0ad7bc-6619-42ab-b8a7-5868d6ab786d
	I0821 10:53:34.704048   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:34.704064   97516 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "27",
	  "gitVersion": "v1.27.4",
	  "gitCommit": "fa3d7990104d7c1f16943a67f11b154b71f6a132",
	  "gitTreeState": "clean",
	  "buildDate": "2023-07-19T12:14:49Z",
	  "goVersion": "go1.20.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0821 10:53:34.704132   97516 api_server.go:141] control plane version: v1.27.4
	I0821 10:53:34.704145   97516 api_server.go:131] duration metric: took 5.286955ms to wait for apiserver health ...
	I0821 10:53:34.704152   97516 system_pods.go:43] waiting for kube-system pods to appear ...
	I0821 10:53:34.884509   97516 request.go:629] Waited for 180.299332ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0821 10:53:34.884566   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0821 10:53:34.884571   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:34.884593   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:34.884602   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:34.887787   97516 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0821 10:53:34.887808   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:34.887820   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:34.887834   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:34.887844   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:34.887854   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:34 GMT
	I0821 10:53:34.887864   97516 round_trippers.go:580]     Audit-Id: 4d8ee8c9-b2e5-4669-9dec-298a2f0d54a3
	I0821 10:53:34.887874   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:34.888389   97516 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"412"},"items":[{"metadata":{"name":"coredns-5d78c9869d-p7wfm","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"e31fc5d6-efb4-4659-95e0-45e4b0319116","resourceVersion":"407","creationTimestamp":"2023-08-21T10:53:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"e0b7f7a8-559b-44c2-879c-54813abddce8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e0b7f7a8-559b-44c2-879c-54813abddce8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55613 chars]
	I0821 10:53:34.891282   97516 system_pods.go:59] 8 kube-system pods found
	I0821 10:53:34.891319   97516 system_pods.go:61] "coredns-5d78c9869d-p7wfm" [e31fc5d6-efb4-4659-95e0-45e4b0319116] Running
	I0821 10:53:34.891329   97516 system_pods.go:61] "etcd-multinode-200985" [a157a5a3-4690-4eb4-9efd-f753499e5e11] Running
	I0821 10:53:34.891343   97516 system_pods.go:61] "kindnet-l9qdc" [a4612ce4-c44d-48c7-88d3-a03b659ddef3] Running
	I0821 10:53:34.891349   97516 system_pods.go:61] "kube-apiserver-multinode-200985" [0a22f07a-55fc-443f-b684-237b16409ed9] Running
	I0821 10:53:34.891374   97516 system_pods.go:61] "kube-controller-manager-multinode-200985" [9f23370f-54a6-415e-b146-ccd32e50df39] Running
	I0821 10:53:34.891382   97516 system_pods.go:61] "kube-proxy-hr82h" [3c4817fa-d083-4bc4-9e1b-a98f77433293] Running
	I0821 10:53:34.891387   97516 system_pods.go:61] "kube-scheduler-multinode-200985" [1ac1d965-22f6-4c06-b04f-7cbfab581bbd] Running
	I0821 10:53:34.891391   97516 system_pods.go:61] "storage-provisioner" [eb07b693-169e-45aa-999e-989f9eb6ae77] Running
	I0821 10:53:34.891397   97516 system_pods.go:74] duration metric: took 187.238285ms to wait for pod list to return data ...
	I0821 10:53:34.891411   97516 default_sa.go:34] waiting for default service account to be created ...
	I0821 10:53:35.084338   97516 request.go:629] Waited for 192.836643ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0821 10:53:35.084407   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0821 10:53:35.084414   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:35.084426   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:35.084441   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:35.086719   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:35.086745   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:35.086753   97516 round_trippers.go:580]     Content-Length: 261
	I0821 10:53:35.086758   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:35 GMT
	I0821 10:53:35.086764   97516 round_trippers.go:580]     Audit-Id: da7d69d1-36af-4f13-af40-57e412a21e63
	I0821 10:53:35.086769   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:35.086775   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:35.086781   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:35.086789   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:35.086819   97516 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"412"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"0a1183dd-7b4d-43dd-803c-6636dbc28c62","resourceVersion":"312","creationTimestamp":"2023-08-21T10:53:01Z"}}]}
	I0821 10:53:35.087051   97516 default_sa.go:45] found service account: "default"
	I0821 10:53:35.087067   97516 default_sa.go:55] duration metric: took 195.649376ms for default service account to be created ...
	I0821 10:53:35.087076   97516 system_pods.go:116] waiting for k8s-apps to be running ...
	I0821 10:53:35.284526   97516 request.go:629] Waited for 197.360619ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0821 10:53:35.284583   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0821 10:53:35.284592   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:35.284609   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:35.284623   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:35.288035   97516 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0821 10:53:35.288066   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:35.288075   97516 round_trippers.go:580]     Audit-Id: a3676c23-e022-4a64-8b0d-c0c40e410d9a
	I0821 10:53:35.288084   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:35.288094   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:35.288104   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:35.288113   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:35.288123   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:35 GMT
	I0821 10:53:35.288563   97516 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"412"},"items":[{"metadata":{"name":"coredns-5d78c9869d-p7wfm","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"e31fc5d6-efb4-4659-95e0-45e4b0319116","resourceVersion":"407","creationTimestamp":"2023-08-21T10:53:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"e0b7f7a8-559b-44c2-879c-54813abddce8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e0b7f7a8-559b-44c2-879c-54813abddce8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55613 chars]
	I0821 10:53:35.291082   97516 system_pods.go:86] 8 kube-system pods found
	I0821 10:53:35.291107   97516 system_pods.go:89] "coredns-5d78c9869d-p7wfm" [e31fc5d6-efb4-4659-95e0-45e4b0319116] Running
	I0821 10:53:35.291115   97516 system_pods.go:89] "etcd-multinode-200985" [a157a5a3-4690-4eb4-9efd-f753499e5e11] Running
	I0821 10:53:35.291122   97516 system_pods.go:89] "kindnet-l9qdc" [a4612ce4-c44d-48c7-88d3-a03b659ddef3] Running
	I0821 10:53:35.291132   97516 system_pods.go:89] "kube-apiserver-multinode-200985" [0a22f07a-55fc-443f-b684-237b16409ed9] Running
	I0821 10:53:35.291139   97516 system_pods.go:89] "kube-controller-manager-multinode-200985" [9f23370f-54a6-415e-b146-ccd32e50df39] Running
	I0821 10:53:35.291144   97516 system_pods.go:89] "kube-proxy-hr82h" [3c4817fa-d083-4bc4-9e1b-a98f77433293] Running
	I0821 10:53:35.291148   97516 system_pods.go:89] "kube-scheduler-multinode-200985" [1ac1d965-22f6-4c06-b04f-7cbfab581bbd] Running
	I0821 10:53:35.291158   97516 system_pods.go:89] "storage-provisioner" [eb07b693-169e-45aa-999e-989f9eb6ae77] Running
	I0821 10:53:35.291166   97516 system_pods.go:126] duration metric: took 204.079529ms to wait for k8s-apps to be running ...
	I0821 10:53:35.291179   97516 system_svc.go:44] waiting for kubelet service to be running ....
	I0821 10:53:35.291228   97516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0821 10:53:35.303347   97516 system_svc.go:56] duration metric: took 12.16221ms WaitForService to wait for kubelet.
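
[Editor's note] The kubelet check above relies on the fact that `systemctl is-active --quiet ...` prints nothing and answers purely via its exit status (0 means active). A sketch of that exit-code test, run locally for simplicity where minikube runs it over SSH:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Arguments mirror the logged command; a non-zero exit surfaces as err != nil.
		err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
		fmt.Println("kubelet active:", err == nil)
	}
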
	I0821 10:53:35.303395   97516 kubeadm.go:581] duration metric: took 33.65361372s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0821 10:53:35.303421   97516 node_conditions.go:102] verifying NodePressure condition ...
	I0821 10:53:35.484864   97516 request.go:629] Waited for 181.348249ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0821 10:53:35.484912   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0821 10:53:35.484917   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:35.484924   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:35.484930   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:35.487434   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:35.487455   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:35.487466   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:35.487475   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:35 GMT
	I0821 10:53:35.487485   97516 round_trippers.go:580]     Audit-Id: db2471a6-887a-4219-9bfa-6658b35feab3
	I0821 10:53:35.487494   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:35.487507   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:35.487518   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:35.487661   97516 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"412"},"items":[{"metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"388","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6000 chars]
	I0821 10:53:35.488050   97516 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0821 10:53:35.488069   97516 node_conditions.go:123] node cpu capacity is 8
	I0821 10:53:35.488082   97516 node_conditions.go:105] duration metric: took 184.655259ms to run NodePressure ...
	I0821 10:53:35.488094   97516 start.go:228] waiting for startup goroutines ...
	I0821 10:53:35.488104   97516 start.go:233] waiting for cluster config update ...
	I0821 10:53:35.488125   97516 start.go:242] writing updated cluster config ...
	I0821 10:53:35.490605   97516 out.go:177] 
	I0821 10:53:35.492349   97516 config.go:182] Loaded profile config "multinode-200985": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0821 10:53:35.492460   97516 profile.go:148] Saving config to /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/multinode-200985/config.json ...
	I0821 10:53:35.494444   97516 out.go:177] * Starting worker node multinode-200985-m02 in cluster multinode-200985
	I0821 10:53:35.495703   97516 cache.go:122] Beginning downloading kic base image for docker with crio
	I0821 10:53:35.497192   97516 out.go:177] * Pulling base image ...
	I0821 10:53:35.498972   97516 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0821 10:53:35.498995   97516 cache.go:57] Caching tarball of preloaded images
	I0821 10:53:35.499051   97516 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0821 10:53:35.499093   97516 preload.go:174] Found /home/jenkins/minikube-integration/17102-5717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0821 10:53:35.499107   97516 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on crio
	I0821 10:53:35.499179   97516 profile.go:148] Saving config to /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/multinode-200985/config.json ...
	I0821 10:53:35.514724   97516 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0821 10:53:35.514743   97516 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0821 10:53:35.514771   97516 cache.go:195] Successfully downloaded all kic artifacts
	I0821 10:53:35.514801   97516 start.go:365] acquiring machines lock for multinode-200985-m02: {Name:mkb052bb2170979030aa772f68a6880ebfef96f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 10:53:35.514911   97516 start.go:369] acquired machines lock for "multinode-200985-m02" in 80.443µs
	I0821 10:53:35.514942   97516 start.go:93] Provisioning new machine with config: &{Name:multinode-200985 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:multinode-200985 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name:m02 IP: Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0821 10:53:35.515019   97516 start.go:125] createHost starting for "m02" (driver="docker")
	I0821 10:53:35.517364   97516 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0821 10:53:35.517468   97516 start.go:159] libmachine.API.Create for "multinode-200985" (driver="docker")
	I0821 10:53:35.517495   97516 client.go:168] LocalClient.Create starting
	I0821 10:53:35.517565   97516 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca.pem
	I0821 10:53:35.517593   97516 main.go:141] libmachine: Decoding PEM data...
	I0821 10:53:35.517607   97516 main.go:141] libmachine: Parsing certificate...
	I0821 10:53:35.517656   97516 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17102-5717/.minikube/certs/cert.pem
	I0821 10:53:35.517673   97516 main.go:141] libmachine: Decoding PEM data...
	I0821 10:53:35.517685   97516 main.go:141] libmachine: Parsing certificate...
	I0821 10:53:35.517908   97516 cli_runner.go:164] Run: docker network inspect multinode-200985 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0821 10:53:35.532952   97516 network_create.go:76] Found existing network {name:multinode-200985 subnet:0xc0015aacc0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
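For context, the network probe above is an ordinary `docker network inspect` call driven by a Go format template; minikube only needs the network's name, subnet, and gateway to place the new node at a static IP on the existing cluster network. A minimal, hypothetical Go sketch of the same query (the helper name is ours and the template is trimmed to two fields, not minikube's own code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// inspectNetwork shells out to the docker CLI, as the cli_runner lines in
	// this log do, and returns the templated inspect output for one network.
	func inspectNetwork(name string) (string, error) {
		format := `{"Name":"{{.Name}}","Subnet":"{{range .IPAM.Config}}{{.Subnet}}{{end}}"}`
		out, err := exec.Command("docker", "network", "inspect", name, "--format", format).CombinedOutput()
		return string(out), err
	}

	func main() {
		out, err := inspectNetwork("multinode-200985")
		if err != nil {
			fmt.Printf("inspect failed: %v\n%s", err, out)
			return
		}
		fmt.Println(out)
	}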
	I0821 10:53:35.533002   97516 kic.go:117] calculated static IP "192.168.58.3" for the "multinode-200985-m02" container
	I0821 10:53:35.533069   97516 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0821 10:53:35.548917   97516 cli_runner.go:164] Run: docker volume create multinode-200985-m02 --label name.minikube.sigs.k8s.io=multinode-200985-m02 --label created_by.minikube.sigs.k8s.io=true
	I0821 10:53:35.565244   97516 oci.go:103] Successfully created a docker volume multinode-200985-m02
	I0821 10:53:35.565308   97516 cli_runner.go:164] Run: docker run --rm --name multinode-200985-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-200985-m02 --entrypoint /usr/bin/test -v multinode-200985-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0821 10:53:36.027484   97516 oci.go:107] Successfully prepared a docker volume multinode-200985-m02
	I0821 10:53:36.027521   97516 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0821 10:53:36.027539   97516 kic.go:190] Starting extracting preloaded images to volume ...
	I0821 10:53:36.027590   97516 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17102-5717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-200985-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0821 10:53:40.804528   97516 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17102-5717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-200985-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (4.776896541s)
	I0821 10:53:40.804561   97516 kic.go:199] duration metric: took 4.777017 seconds to extract preloaded images to volume
	W0821 10:53:40.804693   97516 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0821 10:53:40.804837   97516 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0821 10:53:40.857849   97516 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-200985-m02 --name multinode-200985-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-200985-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-200985-m02 --network multinode-200985 --ip 192.168.58.3 --volume multinode-200985-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0821 10:53:41.140334   97516 cli_runner.go:164] Run: docker container inspect multinode-200985-m02 --format={{.State.Running}}
	I0821 10:53:41.157253   97516 cli_runner.go:164] Run: docker container inspect multinode-200985-m02 --format={{.State.Status}}
	I0821 10:53:41.174252   97516 cli_runner.go:164] Run: docker exec multinode-200985-m02 stat /var/lib/dpkg/alternatives/iptables
	I0821 10:53:41.231118   97516 oci.go:144] the created container "multinode-200985-m02" has a running status.
	I0821 10:53:41.231175   97516 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17102-5717/.minikube/machines/multinode-200985-m02/id_rsa...
	I0821 10:53:41.502385   97516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/machines/multinode-200985-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0821 10:53:41.502428   97516 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17102-5717/.minikube/machines/multinode-200985-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0821 10:53:41.521497   97516 cli_runner.go:164] Run: docker container inspect multinode-200985-m02 --format={{.State.Status}}
	I0821 10:53:41.540088   97516 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0821 10:53:41.540109   97516 kic_runner.go:114] Args: [docker exec --privileged multinode-200985-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0821 10:53:41.611608   97516 cli_runner.go:164] Run: docker container inspect multinode-200985-m02 --format={{.State.Status}}
	I0821 10:53:41.626598   97516 machine.go:88] provisioning docker machine ...
	I0821 10:53:41.626633   97516 ubuntu.go:169] provisioning hostname "multinode-200985-m02"
	I0821 10:53:41.626691   97516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-200985-m02
	I0821 10:53:41.645045   97516 main.go:141] libmachine: Using SSH client type: native
	I0821 10:53:41.645734   97516 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0821 10:53:41.645761   97516 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-200985-m02 && echo "multinode-200985-m02" | sudo tee /etc/hostname
	I0821 10:53:41.805893   97516 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-200985-m02
	
	I0821 10:53:41.805984   97516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-200985-m02
	I0821 10:53:41.823323   97516 main.go:141] libmachine: Using SSH client type: native
	I0821 10:53:41.823736   97516 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0821 10:53:41.823757   97516 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-200985-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-200985-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-200985-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0821 10:53:41.947201   97516 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0821 10:53:41.947230   97516 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17102-5717/.minikube CaCertPath:/home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17102-5717/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17102-5717/.minikube}
	I0821 10:53:41.947251   97516 ubuntu.go:177] setting up certificates
	I0821 10:53:41.947269   97516 provision.go:83] configureAuth start
	I0821 10:53:41.947324   97516 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-200985-m02
	I0821 10:53:41.963215   97516 provision.go:138] copyHostCerts
	I0821 10:53:41.963248   97516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17102-5717/.minikube/ca.pem
	I0821 10:53:41.963276   97516 exec_runner.go:144] found /home/jenkins/minikube-integration/17102-5717/.minikube/ca.pem, removing ...
	I0821 10:53:41.963284   97516 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17102-5717/.minikube/ca.pem
	I0821 10:53:41.963348   97516 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17102-5717/.minikube/ca.pem (1078 bytes)
	I0821 10:53:41.963452   97516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17102-5717/.minikube/cert.pem
	I0821 10:53:41.963476   97516 exec_runner.go:144] found /home/jenkins/minikube-integration/17102-5717/.minikube/cert.pem, removing ...
	I0821 10:53:41.963483   97516 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17102-5717/.minikube/cert.pem
	I0821 10:53:41.963509   97516 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17102-5717/.minikube/cert.pem (1123 bytes)
	I0821 10:53:41.963556   97516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17102-5717/.minikube/key.pem
	I0821 10:53:41.963573   97516 exec_runner.go:144] found /home/jenkins/minikube-integration/17102-5717/.minikube/key.pem, removing ...
	I0821 10:53:41.963579   97516 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17102-5717/.minikube/key.pem
	I0821 10:53:41.963599   97516 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17102-5717/.minikube/key.pem (1675 bytes)
	I0821 10:53:41.963644   97516 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17102-5717/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca-key.pem org=jenkins.multinode-200985-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-200985-m02]
	I0821 10:53:42.201012   97516 provision.go:172] copyRemoteCerts
	I0821 10:53:42.201077   97516 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0821 10:53:42.201120   97516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-200985-m02
	I0821 10:53:42.217355   97516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/multinode-200985-m02/id_rsa Username:docker}
	I0821 10:53:42.307515   97516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0821 10:53:42.307569   97516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0821 10:53:42.328031   97516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0821 10:53:42.328095   97516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0821 10:53:42.347993   97516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0821 10:53:42.348039   97516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0821 10:53:42.368239   97516 provision.go:86] duration metric: configureAuth took 420.955743ms
	I0821 10:53:42.368265   97516 ubuntu.go:193] setting minikube options for container-runtime
	I0821 10:53:42.368425   97516 config.go:182] Loaded profile config "multinode-200985": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0821 10:53:42.368511   97516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-200985-m02
	I0821 10:53:42.383665   97516 main.go:141] libmachine: Using SSH client type: native
	I0821 10:53:42.384087   97516 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0821 10:53:42.384111   97516 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0821 10:53:42.596862   97516 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0821 10:53:42.596887   97516 machine.go:91] provisioned docker machine in 970.267662ms
	I0821 10:53:42.596897   97516 client.go:171] LocalClient.Create took 7.079393998s
	I0821 10:53:42.596920   97516 start.go:167] duration metric: libmachine.API.Create for "multinode-200985" took 7.079452583s
	I0821 10:53:42.596929   97516 start.go:300] post-start starting for "multinode-200985-m02" (driver="docker")
	I0821 10:53:42.596945   97516 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0821 10:53:42.597007   97516 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0821 10:53:42.597055   97516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-200985-m02
	I0821 10:53:42.612489   97516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/multinode-200985-m02/id_rsa Username:docker}
	I0821 10:53:42.708321   97516 ssh_runner.go:195] Run: cat /etc/os-release
	I0821 10:53:42.711011   97516 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.2 LTS"
	I0821 10:53:42.711023   97516 command_runner.go:130] > NAME="Ubuntu"
	I0821 10:53:42.711029   97516 command_runner.go:130] > VERSION_ID="22.04"
	I0821 10:53:42.711034   97516 command_runner.go:130] > VERSION="22.04.2 LTS (Jammy Jellyfish)"
	I0821 10:53:42.711039   97516 command_runner.go:130] > VERSION_CODENAME=jammy
	I0821 10:53:42.711043   97516 command_runner.go:130] > ID=ubuntu
	I0821 10:53:42.711046   97516 command_runner.go:130] > ID_LIKE=debian
	I0821 10:53:42.711051   97516 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0821 10:53:42.711058   97516 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0821 10:53:42.711064   97516 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0821 10:53:42.711079   97516 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0821 10:53:42.711083   97516 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0821 10:53:42.711135   97516 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0821 10:53:42.711157   97516 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0821 10:53:42.711167   97516 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0821 10:53:42.711175   97516 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0821 10:53:42.711186   97516 filesync.go:126] Scanning /home/jenkins/minikube-integration/17102-5717/.minikube/addons for local assets ...
	I0821 10:53:42.711229   97516 filesync.go:126] Scanning /home/jenkins/minikube-integration/17102-5717/.minikube/files for local assets ...
	I0821 10:53:42.711289   97516 filesync.go:149] local asset: /home/jenkins/minikube-integration/17102-5717/.minikube/files/etc/ssl/certs/124602.pem -> 124602.pem in /etc/ssl/certs
	I0821 10:53:42.711300   97516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/files/etc/ssl/certs/124602.pem -> /etc/ssl/certs/124602.pem
	I0821 10:53:42.711411   97516 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0821 10:53:42.718597   97516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/files/etc/ssl/certs/124602.pem --> /etc/ssl/certs/124602.pem (1708 bytes)
	I0821 10:53:42.739389   97516 start.go:303] post-start completed in 142.446002ms
	I0821 10:53:42.739739   97516 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-200985-m02
	I0821 10:53:42.756102   97516 profile.go:148] Saving config to /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/multinode-200985/config.json ...
	I0821 10:53:42.756392   97516 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0821 10:53:42.756451   97516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-200985-m02
	I0821 10:53:42.772344   97516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/multinode-200985-m02/id_rsa Username:docker}
	I0821 10:53:42.859990   97516 command_runner.go:130] > 19%!
	(MISSING)I0821 10:53:42.860067   97516 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0821 10:53:42.864096   97516 command_runner.go:130] > 237G
	I0821 10:53:42.864118   97516 start.go:128] duration metric: createHost completed in 7.349091521s
	I0821 10:53:42.864130   97516 start.go:83] releasing machines lock for "multinode-200985-m02", held for 7.349204097s
	I0821 10:53:42.864191   97516 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-200985-m02
	I0821 10:53:42.882164   97516 out.go:177] * Found network options:
	I0821 10:53:42.883563   97516 out.go:177]   - NO_PROXY=192.168.58.2
	W0821 10:53:42.884990   97516 proxy.go:119] fail to check proxy env: Error ip not in block
	W0821 10:53:42.885026   97516 proxy.go:119] fail to check proxy env: Error ip not in block
	I0821 10:53:42.885094   97516 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0821 10:53:42.885139   97516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-200985-m02
	I0821 10:53:42.885163   97516 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0821 10:53:42.885247   97516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-200985-m02
	I0821 10:53:42.900952   97516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/multinode-200985-m02/id_rsa Username:docker}
	I0821 10:53:42.901437   97516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/multinode-200985-m02/id_rsa Username:docker}
	I0821 10:53:43.076529   97516 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0821 10:53:43.119792   97516 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0821 10:53:43.123554   97516 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0821 10:53:43.123581   97516 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0821 10:53:43.123592   97516 command_runner.go:130] > Device: b0h/176d	Inode: 540046      Links: 1
	I0821 10:53:43.123602   97516 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0821 10:53:43.123611   97516 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0821 10:53:43.123622   97516 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0821 10:53:43.123634   97516 command_runner.go:130] > Change: 2023-08-21 10:33:50.706032318 +0000
	I0821 10:53:43.123646   97516 command_runner.go:130] >  Birth: 2023-08-21 10:33:50.706032318 +0000
	I0821 10:53:43.123813   97516 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0821 10:53:43.140122   97516 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0821 10:53:43.140196   97516 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0821 10:53:43.164643   97516 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0821 10:53:43.164676   97516 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0821 10:53:43.164685   97516 start.go:466] detecting cgroup driver to use...
	I0821 10:53:43.164731   97516 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0821 10:53:43.164784   97516 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0821 10:53:43.177830   97516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0821 10:53:43.187290   97516 docker.go:196] disabling cri-docker service (if available) ...
	I0821 10:53:43.187344   97516 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0821 10:53:43.198926   97516 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0821 10:53:43.210522   97516 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0821 10:53:43.291695   97516 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0821 10:53:43.304894   97516 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0821 10:53:43.372636   97516 docker.go:212] disabling docker service ...
	I0821 10:53:43.372690   97516 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0821 10:53:43.390726   97516 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0821 10:53:43.401465   97516 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0821 10:53:43.480622   97516 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0821 10:53:43.480689   97516 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0821 10:53:43.560452   97516 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0821 10:53:43.560525   97516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0821 10:53:43.570842   97516 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0821 10:53:43.584176   97516 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0821 10:53:43.584890   97516 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0821 10:53:43.584939   97516 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0821 10:53:43.593496   97516 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0821 10:53:43.593560   97516 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0821 10:53:43.602074   97516 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0821 10:53:43.610564   97516 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0821 10:53:43.618858   97516 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0821 10:53:43.626695   97516 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0821 10:53:43.633407   97516 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0821 10:53:43.634145   97516 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0821 10:53:43.641579   97516 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0821 10:53:43.718898   97516 ssh_runner.go:195] Run: sudo systemctl restart crio
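Net effect of the sed edits above on the /etc/crio/crio.conf.d/02-crio.conf drop-in, reconstructed from the commands themselves (any other keys in that file are not shown in this log):

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"

The `crio config` dump later in this log confirms that the cgroup_manager and conmon_cgroup values took effect before the restart.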
	I0821 10:53:43.816549   97516 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0821 10:53:43.816623   97516 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0821 10:53:43.819815   97516 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0821 10:53:43.819839   97516 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0821 10:53:43.819851   97516 command_runner.go:130] > Device: b9h/185d	Inode: 186         Links: 1
	I0821 10:53:43.819858   97516 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0821 10:53:43.819863   97516 command_runner.go:130] > Access: 2023-08-21 10:53:43.804504619 +0000
	I0821 10:53:43.819869   97516 command_runner.go:130] > Modify: 2023-08-21 10:53:43.804504619 +0000
	I0821 10:53:43.819874   97516 command_runner.go:130] > Change: 2023-08-21 10:53:43.804504619 +0000
	I0821 10:53:43.819885   97516 command_runner.go:130] >  Birth: -
	I0821 10:53:43.819926   97516 start.go:534] Will wait 60s for crictl version
	I0821 10:53:43.819972   97516 ssh_runner.go:195] Run: which crictl
	I0821 10:53:43.823041   97516 command_runner.go:130] > /usr/bin/crictl
	I0821 10:53:43.823117   97516 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0821 10:53:43.855051   97516 command_runner.go:130] > Version:  0.1.0
	I0821 10:53:43.855069   97516 command_runner.go:130] > RuntimeName:  cri-o
	I0821 10:53:43.855073   97516 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0821 10:53:43.855079   97516 command_runner.go:130] > RuntimeApiVersion:  v1
	I0821 10:53:43.855096   97516 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0821 10:53:43.855156   97516 ssh_runner.go:195] Run: crio --version
	I0821 10:53:43.886088   97516 command_runner.go:130] > crio version 1.24.6
	I0821 10:53:43.886109   97516 command_runner.go:130] > Version:          1.24.6
	I0821 10:53:43.886131   97516 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0821 10:53:43.886139   97516 command_runner.go:130] > GitTreeState:     clean
	I0821 10:53:43.886150   97516 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0821 10:53:43.886163   97516 command_runner.go:130] > GoVersion:        go1.18.2
	I0821 10:53:43.886169   97516 command_runner.go:130] > Compiler:         gc
	I0821 10:53:43.886177   97516 command_runner.go:130] > Platform:         linux/amd64
	I0821 10:53:43.886191   97516 command_runner.go:130] > Linkmode:         dynamic
	I0821 10:53:43.886206   97516 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0821 10:53:43.886216   97516 command_runner.go:130] > SeccompEnabled:   true
	I0821 10:53:43.886226   97516 command_runner.go:130] > AppArmorEnabled:  false
	I0821 10:53:43.887432   97516 ssh_runner.go:195] Run: crio --version
	I0821 10:53:43.917196   97516 command_runner.go:130] > crio version 1.24.6
	I0821 10:53:43.917221   97516 command_runner.go:130] > Version:          1.24.6
	I0821 10:53:43.917231   97516 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0821 10:53:43.917237   97516 command_runner.go:130] > GitTreeState:     clean
	I0821 10:53:43.917245   97516 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0821 10:53:43.917252   97516 command_runner.go:130] > GoVersion:        go1.18.2
	I0821 10:53:43.917258   97516 command_runner.go:130] > Compiler:         gc
	I0821 10:53:43.917267   97516 command_runner.go:130] > Platform:         linux/amd64
	I0821 10:53:43.917281   97516 command_runner.go:130] > Linkmode:         dynamic
	I0821 10:53:43.917299   97516 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0821 10:53:43.917309   97516 command_runner.go:130] > SeccompEnabled:   true
	I0821 10:53:43.917319   97516 command_runner.go:130] > AppArmorEnabled:  false
	I0821 10:53:43.920421   97516 out.go:177] * Preparing Kubernetes v1.27.4 on CRI-O 1.24.6 ...
	I0821 10:53:43.922050   97516 out.go:177]   - env NO_PROXY=192.168.58.2
	I0821 10:53:43.923449   97516 cli_runner.go:164] Run: docker network inspect multinode-200985 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0821 10:53:43.939624   97516 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0821 10:53:43.943184   97516 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
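The one-liner above is an upsert on /etc/hosts: drop any existing line for host.minikube.internal, append the fresh mapping, and copy the result back over the file via a temp file rather than writing through the pipe. A hedged Go sketch of the same string transformation (function name ours, for illustration only):

	package main

	import (
		"fmt"
		"strings"
	)

	// upsertHostsEntry drops any line ending in "\t<name>" and appends a fresh
	// "<ip>\t<name>" mapping, mirroring the grep -v / echo pipeline above.
	func upsertHostsEntry(hosts, ip, name string) string {
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return strings.Join(kept, "\n") + "\n"
	}

	func main() {
		hosts := "127.0.0.1\tlocalhost\n192.168.58.1\thost.minikube.internal\n"
		fmt.Print(upsertHostsEntry(hosts, "192.168.58.1", "host.minikube.internal"))
	}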
	I0821 10:53:43.952535   97516 certs.go:56] Setting up /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/multinode-200985 for IP: 192.168.58.3
	I0821 10:53:43.952568   97516 certs.go:190] acquiring lock for shared ca certs: {Name:mkb88db7eb1befc1f1b3279575458c71b2313cc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 10:53:43.952701   97516 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17102-5717/.minikube/ca.key
	I0821 10:53:43.952743   97516 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17102-5717/.minikube/proxy-client-ca.key
	I0821 10:53:43.952756   97516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0821 10:53:43.952767   97516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0821 10:53:43.952778   97516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0821 10:53:43.952791   97516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0821 10:53:43.952835   97516 certs.go:437] found cert: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/home/jenkins/minikube-integration/17102-5717/.minikube/certs/12460.pem (1338 bytes)
	W0821 10:53:43.952863   97516 certs.go:433] ignoring /home/jenkins/minikube-integration/17102-5717/.minikube/certs/home/jenkins/minikube-integration/17102-5717/.minikube/certs/12460_empty.pem, impossibly tiny 0 bytes
	I0821 10:53:43.952874   97516 certs.go:437] found cert: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca-key.pem (1679 bytes)
	I0821 10:53:43.952896   97516 certs.go:437] found cert: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca.pem (1078 bytes)
	I0821 10:53:43.952919   97516 certs.go:437] found cert: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/home/jenkins/minikube-integration/17102-5717/.minikube/certs/cert.pem (1123 bytes)
	I0821 10:53:43.952941   97516 certs.go:437] found cert: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/home/jenkins/minikube-integration/17102-5717/.minikube/certs/key.pem (1675 bytes)
	I0821 10:53:43.952978   97516 certs.go:437] found cert: /home/jenkins/minikube-integration/17102-5717/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17102-5717/.minikube/files/etc/ssl/certs/124602.pem (1708 bytes)
	I0821 10:53:43.953001   97516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/files/etc/ssl/certs/124602.pem -> /usr/share/ca-certificates/124602.pem
	I0821 10:53:43.953017   97516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0821 10:53:43.953028   97516 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/12460.pem -> /usr/share/ca-certificates/12460.pem
	I0821 10:53:43.953326   97516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0821 10:53:43.973385   97516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0821 10:53:43.994013   97516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0821 10:53:44.014234   97516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0821 10:53:44.033788   97516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/files/etc/ssl/certs/124602.pem --> /usr/share/ca-certificates/124602.pem (1708 bytes)
	I0821 10:53:44.054099   97516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0821 10:53:44.073669   97516 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/certs/12460.pem --> /usr/share/ca-certificates/12460.pem (1338 bytes)
	I0821 10:53:44.094263   97516 ssh_runner.go:195] Run: openssl version
	I0821 10:53:44.098742   97516 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0821 10:53:44.098881   97516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/124602.pem && ln -fs /usr/share/ca-certificates/124602.pem /etc/ssl/certs/124602.pem"
	I0821 10:53:44.106850   97516 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/124602.pem
	I0821 10:53:44.109665   97516 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 21 10:39 /usr/share/ca-certificates/124602.pem
	I0821 10:53:44.109690   97516 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 21 10:39 /usr/share/ca-certificates/124602.pem
	I0821 10:53:44.109724   97516 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/124602.pem
	I0821 10:53:44.115333   97516 command_runner.go:130] > 3ec20f2e
	I0821 10:53:44.115512   97516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/124602.pem /etc/ssl/certs/3ec20f2e.0"
	I0821 10:53:44.123172   97516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0821 10:53:44.130869   97516 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0821 10:53:44.134085   97516 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 21 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I0821 10:53:44.134118   97516 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 21 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I0821 10:53:44.134149   97516 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0821 10:53:44.139764   97516 command_runner.go:130] > b5213941
	I0821 10:53:44.140043   97516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0821 10:53:44.147586   97516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12460.pem && ln -fs /usr/share/ca-certificates/12460.pem /etc/ssl/certs/12460.pem"
	I0821 10:53:44.155501   97516 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12460.pem
	I0821 10:53:44.158289   97516 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 21 10:39 /usr/share/ca-certificates/12460.pem
	I0821 10:53:44.158319   97516 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 21 10:39 /usr/share/ca-certificates/12460.pem
	I0821 10:53:44.158386   97516 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12460.pem
	I0821 10:53:44.164176   97516 command_runner.go:130] > 51391683
	I0821 10:53:44.164227   97516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12460.pem /etc/ssl/certs/51391683.0"
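The hash-and-symlink sequence above follows OpenSSL's CA-directory convention: `openssl x509 -hash -noout -in <cert>` prints the subject-name hash, and a `<hash>.0` symlink in /etc/ssl/certs lets the TLS stack locate the certificate by subject at verification time. From the output above, the links created here are:

	124602.pem     -> /etc/ssl/certs/3ec20f2e.0
	minikubeCA.pem -> /etc/ssl/certs/b5213941.0
	12460.pem      -> /etc/ssl/certs/51391683.0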
	I0821 10:53:44.172028   97516 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0821 10:53:44.174812   97516 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0821 10:53:44.174845   97516 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0821 10:53:44.174916   97516 ssh_runner.go:195] Run: crio config
	I0821 10:53:44.212264   97516 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0821 10:53:44.212290   97516 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0821 10:53:44.212300   97516 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0821 10:53:44.212311   97516 command_runner.go:130] > #
	I0821 10:53:44.212323   97516 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0821 10:53:44.212334   97516 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0821 10:53:44.212348   97516 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0821 10:53:44.212362   97516 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0821 10:53:44.212377   97516 command_runner.go:130] > # reload'.
	I0821 10:53:44.212392   97516 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0821 10:53:44.212405   97516 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0821 10:53:44.212418   97516 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0821 10:53:44.212432   97516 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0821 10:53:44.212440   97516 command_runner.go:130] > [crio]
	I0821 10:53:44.212451   97516 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0821 10:53:44.212462   97516 command_runner.go:130] > # containers images, in this directory.
	I0821 10:53:44.212475   97516 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0821 10:53:44.212490   97516 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0821 10:53:44.212498   97516 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0821 10:53:44.212507   97516 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0821 10:53:44.212517   97516 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0821 10:53:44.212525   97516 command_runner.go:130] > # storage_driver = "vfs"
	I0821 10:53:44.212538   97516 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0821 10:53:44.212551   97516 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0821 10:53:44.212561   97516 command_runner.go:130] > # storage_option = [
	I0821 10:53:44.212567   97516 command_runner.go:130] > # ]
	I0821 10:53:44.212581   97516 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0821 10:53:44.212591   97516 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0821 10:53:44.212602   97516 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0821 10:53:44.212612   97516 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0821 10:53:44.212629   97516 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0821 10:53:44.212640   97516 command_runner.go:130] > # always happen on a node reboot
	I0821 10:53:44.212651   97516 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0821 10:53:44.212662   97516 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0821 10:53:44.212676   97516 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0821 10:53:44.212694   97516 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0821 10:53:44.212706   97516 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0821 10:53:44.212721   97516 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0821 10:53:44.212738   97516 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0821 10:53:44.212750   97516 command_runner.go:130] > # internal_wipe = true
	I0821 10:53:44.212762   97516 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0821 10:53:44.212774   97516 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0821 10:53:44.212792   97516 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0821 10:53:44.212801   97516 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0821 10:53:44.212815   97516 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0821 10:53:44.212822   97516 command_runner.go:130] > [crio.api]
	I0821 10:53:44.212834   97516 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0821 10:53:44.212845   97516 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0821 10:53:44.212854   97516 command_runner.go:130] > # IP address on which the stream server will listen.
	I0821 10:53:44.212861   97516 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0821 10:53:44.212872   97516 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0821 10:53:44.212884   97516 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0821 10:53:44.212892   97516 command_runner.go:130] > # stream_port = "0"
	I0821 10:53:44.212901   97516 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0821 10:53:44.212913   97516 command_runner.go:130] > # stream_enable_tls = false
	I0821 10:53:44.212924   97516 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0821 10:53:44.212934   97516 command_runner.go:130] > # stream_idle_timeout = ""
	I0821 10:53:44.212946   97516 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0821 10:53:44.212960   97516 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0821 10:53:44.212969   97516 command_runner.go:130] > # minutes.
	I0821 10:53:44.212975   97516 command_runner.go:130] > # stream_tls_cert = ""
	I0821 10:53:44.212986   97516 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0821 10:53:44.213001   97516 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0821 10:53:44.213009   97516 command_runner.go:130] > # stream_tls_key = ""
	I0821 10:53:44.213020   97516 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0821 10:53:44.213035   97516 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0821 10:53:44.213046   97516 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0821 10:53:44.213054   97516 command_runner.go:130] > # stream_tls_ca = ""
	I0821 10:53:44.213070   97516 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0821 10:53:44.213077   97516 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0821 10:53:44.213089   97516 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0821 10:53:44.213101   97516 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0821 10:53:44.213177   97516 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0821 10:53:44.213192   97516 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0821 10:53:44.213199   97516 command_runner.go:130] > [crio.runtime]
	I0821 10:53:44.213213   97516 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0821 10:53:44.213228   97516 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0821 10:53:44.213236   97516 command_runner.go:130] > # "nofile=1024:2048"
	I0821 10:53:44.213246   97516 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0821 10:53:44.213254   97516 command_runner.go:130] > # default_ulimits = [
	I0821 10:53:44.213264   97516 command_runner.go:130] > # ]
	I0821 10:53:44.213274   97516 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0821 10:53:44.213286   97516 command_runner.go:130] > # no_pivot = false
	I0821 10:53:44.213296   97516 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0821 10:53:44.213309   97516 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0821 10:53:44.213317   97516 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0821 10:53:44.213328   97516 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0821 10:53:44.213341   97516 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0821 10:53:44.213355   97516 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0821 10:53:44.213365   97516 command_runner.go:130] > # conmon = ""
	I0821 10:53:44.213380   97516 command_runner.go:130] > # Cgroup setting for conmon
	I0821 10:53:44.213391   97516 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0821 10:53:44.213398   97516 command_runner.go:130] > conmon_cgroup = "pod"
	I0821 10:53:44.213409   97516 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0821 10:53:44.213421   97516 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0821 10:53:44.213434   97516 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0821 10:53:44.213444   97516 command_runner.go:130] > # conmon_env = [
	I0821 10:53:44.213449   97516 command_runner.go:130] > # ]
	I0821 10:53:44.213459   97516 command_runner.go:130] > # Additional environment variables to set for all the
	I0821 10:53:44.213473   97516 command_runner.go:130] > # containers. These are overridden if set in the
	I0821 10:53:44.213482   97516 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0821 10:53:44.213486   97516 command_runner.go:130] > # default_env = [
	I0821 10:53:44.213491   97516 command_runner.go:130] > # ]
	I0821 10:53:44.213497   97516 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0821 10:53:44.213503   97516 command_runner.go:130] > # selinux = false
	I0821 10:53:44.213509   97516 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0821 10:53:44.213517   97516 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0821 10:53:44.213523   97516 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0821 10:53:44.213529   97516 command_runner.go:130] > # seccomp_profile = ""
	I0821 10:53:44.213535   97516 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0821 10:53:44.213551   97516 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0821 10:53:44.213558   97516 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0821 10:53:44.213565   97516 command_runner.go:130] > # which might increase security.
	I0821 10:53:44.213570   97516 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0821 10:53:44.213579   97516 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0821 10:53:44.213587   97516 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0821 10:53:44.213595   97516 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0821 10:53:44.213602   97516 command_runner.go:130] > # the profile is set to "unconfined", then this is equivalent to disabling AppArmor.
	I0821 10:53:44.213609   97516 command_runner.go:130] > # This option supports live configuration reload.
	I0821 10:53:44.213613   97516 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0821 10:53:44.213622   97516 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0821 10:53:44.213627   97516 command_runner.go:130] > # the cgroup blockio controller.
	I0821 10:53:44.213631   97516 command_runner.go:130] > # blockio_config_file = ""
	I0821 10:53:44.213638   97516 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0821 10:53:44.213644   97516 command_runner.go:130] > # irqbalance daemon.
	I0821 10:53:44.213650   97516 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0821 10:53:44.213658   97516 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0821 10:53:44.213663   97516 command_runner.go:130] > # This option supports live configuration reload.
	I0821 10:53:44.213669   97516 command_runner.go:130] > # rdt_config_file = ""
	I0821 10:53:44.213675   97516 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0821 10:53:44.213681   97516 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0821 10:53:44.213687   97516 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0821 10:53:44.213694   97516 command_runner.go:130] > # separate_pull_cgroup = ""
	I0821 10:53:44.213700   97516 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0821 10:53:44.213708   97516 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0821 10:53:44.213712   97516 command_runner.go:130] > # will be added.
	I0821 10:53:44.213718   97516 command_runner.go:130] > # default_capabilities = [
	I0821 10:53:44.213722   97516 command_runner.go:130] > # 	"CHOWN",
	I0821 10:53:44.213729   97516 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0821 10:53:44.213732   97516 command_runner.go:130] > # 	"FSETID",
	I0821 10:53:44.213736   97516 command_runner.go:130] > # 	"FOWNER",
	I0821 10:53:44.213740   97516 command_runner.go:130] > # 	"SETGID",
	I0821 10:53:44.213743   97516 command_runner.go:130] > # 	"SETUID",
	I0821 10:53:44.213747   97516 command_runner.go:130] > # 	"SETPCAP",
	I0821 10:53:44.213751   97516 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0821 10:53:44.213756   97516 command_runner.go:130] > # 	"KILL",
	I0821 10:53:44.213760   97516 command_runner.go:130] > # ]
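
The default_capabilities list above is commented out, so CRI-O's built-in defaults apply (they are echoed in the "Using default capabilities" startup line further down). A minimal sketch of narrowing the set with a drop-in file, assuming CRI-O's /etc/crio/crio.conf.d drop-in directory; the file name and the trimmed list are illustrative, not taken from this run:

	# Hypothetical drop-in: run containers without SETUID/SETGID.
	sudo tee /etc/crio/crio.conf.d/10-capabilities.conf <<'EOF'
	[crio.runtime]
	default_capabilities = [
		"CHOWN",
		"DAC_OVERRIDE",
		"FSETID",
		"FOWNER",
		"SETPCAP",
		"NET_BIND_SERVICE",
		"KILL",
	]
	EOF
	sudo systemctl restart crio
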
	I0821 10:53:44.213770   97516 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0821 10:53:44.213778   97516 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0821 10:53:44.213785   97516 command_runner.go:130] > # add_inheritable_capabilities = true
	I0821 10:53:44.213790   97516 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0821 10:53:44.213798   97516 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0821 10:53:44.213802   97516 command_runner.go:130] > # default_sysctls = [
	I0821 10:53:44.213805   97516 command_runner.go:130] > # ]
	I0821 10:53:44.213809   97516 command_runner.go:130] > # List of devices on the host that a
	I0821 10:53:44.213815   97516 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0821 10:53:44.213818   97516 command_runner.go:130] > # allowed_devices = [
	I0821 10:53:44.213822   97516 command_runner.go:130] > # 	"/dev/fuse",
	I0821 10:53:44.213826   97516 command_runner.go:130] > # ]
	I0821 10:53:44.213830   97516 command_runner.go:130] > # List of additional devices, specified as
	I0821 10:53:44.213883   97516 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0821 10:53:44.213896   97516 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0821 10:53:44.213909   97516 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0821 10:53:44.213916   97516 command_runner.go:130] > # additional_devices = [
	I0821 10:53:44.213920   97516 command_runner.go:130] > # ]
	I0821 10:53:44.213925   97516 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0821 10:53:44.213932   97516 command_runner.go:130] > # cdi_spec_dirs = [
	I0821 10:53:44.213935   97516 command_runner.go:130] > # 	"/etc/cdi",
	I0821 10:53:44.213939   97516 command_runner.go:130] > # 	"/var/run/cdi",
	I0821 10:53:44.213944   97516 command_runner.go:130] > # ]
	I0821 10:53:44.213950   97516 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0821 10:53:44.213959   97516 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0821 10:53:44.213963   97516 command_runner.go:130] > # Defaults to false.
	I0821 10:53:44.213974   97516 command_runner.go:130] > # device_ownership_from_security_context = false
	I0821 10:53:44.213988   97516 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0821 10:53:44.214001   97516 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0821 10:53:44.214008   97516 command_runner.go:130] > # hooks_dir = [
	I0821 10:53:44.214013   97516 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0821 10:53:44.214019   97516 command_runner.go:130] > # ]
	I0821 10:53:44.214025   97516 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0821 10:53:44.214033   97516 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0821 10:53:44.214038   97516 command_runner.go:130] > # its default mounts from the following two files:
	I0821 10:53:44.214044   97516 command_runner.go:130] > #
	I0821 10:53:44.214050   97516 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0821 10:53:44.214059   97516 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0821 10:53:44.214064   97516 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0821 10:53:44.214070   97516 command_runner.go:130] > #
	I0821 10:53:44.214076   97516 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0821 10:53:44.214084   97516 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0821 10:53:44.214090   97516 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0821 10:53:44.214097   97516 command_runner.go:130] > #      only add mounts it finds in this file.
	I0821 10:53:44.214101   97516 command_runner.go:130] > #
	I0821 10:53:44.214108   97516 command_runner.go:130] > # default_mounts_file = ""
	I0821 10:53:44.214113   97516 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0821 10:53:44.214121   97516 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0821 10:53:44.214125   97516 command_runner.go:130] > # pids_limit = 0
	I0821 10:53:44.214134   97516 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0821 10:53:44.214140   97516 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0821 10:53:44.214148   97516 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0821 10:53:44.214156   97516 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0821 10:53:44.214162   97516 command_runner.go:130] > # log_size_max = -1
	I0821 10:53:44.214168   97516 command_runner.go:130] > # Whether container output should be logged to journald in addition to the Kubernetes log file
	I0821 10:53:44.214175   97516 command_runner.go:130] > # log_to_journald = false
	I0821 10:53:44.214181   97516 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0821 10:53:44.214189   97516 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0821 10:53:44.214194   97516 command_runner.go:130] > # Path to directory for container attach sockets.
	I0821 10:53:44.214201   97516 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0821 10:53:44.214207   97516 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0821 10:53:44.214211   97516 command_runner.go:130] > # bind_mount_prefix = ""
	I0821 10:53:44.214217   97516 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0821 10:53:44.214223   97516 command_runner.go:130] > # read_only = false
	I0821 10:53:44.214229   97516 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0821 10:53:44.214238   97516 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0821 10:53:44.214242   97516 command_runner.go:130] > # live configuration reload.
	I0821 10:53:44.214248   97516 command_runner.go:130] > # log_level = "info"
	I0821 10:53:44.214253   97516 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0821 10:53:44.214260   97516 command_runner.go:130] > # This option supports live configuration reload.
	I0821 10:53:44.214267   97516 command_runner.go:130] > # log_filter = ""
	I0821 10:53:44.214275   97516 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0821 10:53:44.214282   97516 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0821 10:53:44.214287   97516 command_runner.go:130] > # separated by commas.
	I0821 10:53:44.214292   97516 command_runner.go:130] > # uid_mappings = ""
	I0821 10:53:44.214297   97516 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0821 10:53:44.214305   97516 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0821 10:53:44.214310   97516 command_runner.go:130] > # separated by commas.
	I0821 10:53:44.214316   97516 command_runner.go:130] > # gid_mappings = ""
	I0821 10:53:44.214322   97516 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0821 10:53:44.214330   97516 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0821 10:53:44.214336   97516 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0821 10:53:44.214343   97516 command_runner.go:130] > # minimum_mappable_uid = -1
	I0821 10:53:44.214349   97516 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0821 10:53:44.214357   97516 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0821 10:53:44.214363   97516 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0821 10:53:44.214370   97516 command_runner.go:130] > # minimum_mappable_gid = -1
	I0821 10:53:44.214382   97516 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0821 10:53:44.214388   97516 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0821 10:53:44.214394   97516 command_runner.go:130] > # value is 30s; lower values are ignored by CRI-O.
	I0821 10:53:44.214398   97516 command_runner.go:130] > # ctr_stop_timeout = 30
	I0821 10:53:44.214406   97516 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0821 10:53:44.214429   97516 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0821 10:53:44.214440   97516 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0821 10:53:44.214451   97516 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0821 10:53:44.214459   97516 command_runner.go:130] > # drop_infra_ctr = true
	I0821 10:53:44.214466   97516 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0821 10:53:44.214473   97516 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0821 10:53:44.214480   97516 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0821 10:53:44.214487   97516 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0821 10:53:44.214493   97516 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0821 10:53:44.214498   97516 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0821 10:53:44.214504   97516 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0821 10:53:44.214511   97516 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0821 10:53:44.214517   97516 command_runner.go:130] > # pinns_path = ""
	I0821 10:53:44.214528   97516 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0821 10:53:44.214542   97516 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0821 10:53:44.214554   97516 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0821 10:53:44.214562   97516 command_runner.go:130] > # default_runtime = "runc"
	I0821 10:53:44.214567   97516 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0821 10:53:44.214576   97516 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0821 10:53:44.214585   97516 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0821 10:53:44.214592   97516 command_runner.go:130] > # creation as a file is not desired either.
	I0821 10:53:44.214600   97516 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0821 10:53:44.214607   97516 command_runner.go:130] > # the hostname is being managed dynamically.
	I0821 10:53:44.214612   97516 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0821 10:53:44.214615   97516 command_runner.go:130] > # ]
	I0821 10:53:44.214621   97516 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0821 10:53:44.214630   97516 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0821 10:53:44.214636   97516 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0821 10:53:44.214645   97516 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0821 10:53:44.214648   97516 command_runner.go:130] > #
	I0821 10:53:44.214653   97516 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0821 10:53:44.214660   97516 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0821 10:53:44.214664   97516 command_runner.go:130] > #  runtime_type = "oci"
	I0821 10:53:44.214671   97516 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0821 10:53:44.214676   97516 command_runner.go:130] > #  privileged_without_host_devices = false
	I0821 10:53:44.214682   97516 command_runner.go:130] > #  allowed_annotations = []
	I0821 10:53:44.214686   97516 command_runner.go:130] > # Where:
	I0821 10:53:44.214691   97516 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0821 10:53:44.214699   97516 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0821 10:53:44.214705   97516 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0821 10:53:44.214714   97516 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0821 10:53:44.214718   97516 command_runner.go:130] > #   in $PATH.
	I0821 10:53:44.214726   97516 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0821 10:53:44.214731   97516 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0821 10:53:44.214739   97516 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0821 10:53:44.214743   97516 command_runner.go:130] > #   state.
	I0821 10:53:44.214751   97516 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0821 10:53:44.214757   97516 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0821 10:53:44.214765   97516 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0821 10:53:44.214770   97516 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0821 10:53:44.214778   97516 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0821 10:53:44.214784   97516 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0821 10:53:44.214790   97516 command_runner.go:130] > #   The currently recognized values are:
	I0821 10:53:44.214797   97516 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0821 10:53:44.214803   97516 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0821 10:53:44.214809   97516 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0821 10:53:44.214817   97516 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0821 10:53:44.214824   97516 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0821 10:53:44.214833   97516 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0821 10:53:44.214839   97516 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0821 10:53:44.214848   97516 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0821 10:53:44.214852   97516 command_runner.go:130] > #   should be moved to the container's cgroup
	I0821 10:53:44.214859   97516 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0821 10:53:44.214864   97516 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0821 10:53:44.214870   97516 command_runner.go:130] > runtime_type = "oci"
	I0821 10:53:44.214874   97516 command_runner.go:130] > runtime_root = "/run/runc"
	I0821 10:53:44.214877   97516 command_runner.go:130] > runtime_config_path = ""
	I0821 10:53:44.214881   97516 command_runner.go:130] > monitor_path = ""
	I0821 10:53:44.214886   97516 command_runner.go:130] > monitor_cgroup = ""
	I0821 10:53:44.214889   97516 command_runner.go:130] > monitor_exec_cgroup = ""
	I0821 10:53:44.214932   97516 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0821 10:53:44.214942   97516 command_runner.go:130] > # running containers
	I0821 10:53:44.214948   97516 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0821 10:53:44.214957   97516 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0821 10:53:44.214965   97516 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0821 10:53:44.214973   97516 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0821 10:53:44.214978   97516 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0821 10:53:44.214983   97516 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0821 10:53:44.214990   97516 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0821 10:53:44.214994   97516 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0821 10:53:44.214999   97516 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0821 10:53:44.215008   97516 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
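
Of the handlers above, only runc is actually defined; the crun and kata stanzas are commented placeholders. A hedged sketch of registering an extra handler in the documented table format and exposing it to pods via a RuntimeClass (the binary path and the "crun" name are assumptions for illustration):

	# Hypothetical additional runtime handler.
	sudo tee /etc/crio/crio.conf.d/20-crun.conf <<'EOF'
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	EOF
	sudo systemctl restart crio
	# Pods select the handler through a RuntimeClass whose handler
	# field matches the table key:
	kubectl apply -f - <<'EOF'
	apiVersion: node.k8s.io/v1
	kind: RuntimeClass
	metadata:
	  name: crun
	handler: crun
	EOF
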
	I0821 10:53:44.215019   97516 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0821 10:53:44.215032   97516 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0821 10:53:44.215045   97516 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0821 10:53:44.215060   97516 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0821 10:53:44.215074   97516 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0821 10:53:44.215084   97516 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0821 10:53:44.215103   97516 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0821 10:53:44.215120   97516 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0821 10:53:44.215130   97516 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0821 10:53:44.215141   97516 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0821 10:53:44.215149   97516 command_runner.go:130] > # Example:
	I0821 10:53:44.215153   97516 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0821 10:53:44.215159   97516 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0821 10:53:44.215164   97516 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0821 10:53:44.215172   97516 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0821 10:53:44.215176   97516 command_runner.go:130] > # cpuset = "0-1"
	I0821 10:53:44.215180   97516 command_runner.go:130] > # cpushares = 0
	I0821 10:53:44.215185   97516 command_runner.go:130] > # Where:
	I0821 10:53:44.215190   97516 command_runner.go:130] > # The workload name is workload-type.
	I0821 10:53:44.215199   97516 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0821 10:53:44.215208   97516 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0821 10:53:44.215221   97516 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0821 10:53:44.215238   97516 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0821 10:53:44.215247   97516 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0821 10:53:44.215250   97516 command_runner.go:130] > # 
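
The workloads table is flagged EXPERIMENTAL and is entirely commented out in this config, so the annotations would be inert on this node; still, a sketch of a pod opting into the commented "workload-type" example above (the pod name, container name and cpushares value are illustrative):

	kubectl apply -f - <<'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: workload-demo
	  annotations:
	    io.crio/workload: ""                        # activation: key only
	    io.crio.workload-type/demo: '{"cpushares": "512"}'
	spec:
	  containers:
	  - name: demo
	    image: registry.k8s.io/pause:3.9
	EOF
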
	I0821 10:53:44.215260   97516 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0821 10:53:44.215263   97516 command_runner.go:130] > #
	I0821 10:53:44.215270   97516 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0821 10:53:44.215278   97516 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0821 10:53:44.215284   97516 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0821 10:53:44.215292   97516 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0821 10:53:44.215298   97516 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0821 10:53:44.215304   97516 command_runner.go:130] > [crio.image]
	I0821 10:53:44.215309   97516 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0821 10:53:44.215316   97516 command_runner.go:130] > # default_transport = "docker://"
	I0821 10:53:44.215322   97516 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0821 10:53:44.215330   97516 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0821 10:53:44.215335   97516 command_runner.go:130] > # global_auth_file = ""
	I0821 10:53:44.215340   97516 command_runner.go:130] > # The image used to instantiate infra containers.
	I0821 10:53:44.215345   97516 command_runner.go:130] > # This option supports live configuration reload.
	I0821 10:53:44.215369   97516 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0821 10:53:44.215386   97516 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0821 10:53:44.215400   97516 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0821 10:53:44.215412   97516 command_runner.go:130] > # This option supports live configuration reload.
	I0821 10:53:44.215421   97516 command_runner.go:130] > # pause_image_auth_file = ""
	I0821 10:53:44.215427   97516 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0821 10:53:44.215435   97516 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0821 10:53:44.215441   97516 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0821 10:53:44.215448   97516 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0821 10:53:44.215453   97516 command_runner.go:130] > # pause_command = "/pause"
	I0821 10:53:44.215462   97516 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0821 10:53:44.215469   97516 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0821 10:53:44.215477   97516 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0821 10:53:44.215483   97516 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0821 10:53:44.215490   97516 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0821 10:53:44.215495   97516 command_runner.go:130] > # signature_policy = ""
	I0821 10:53:44.215508   97516 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0821 10:53:44.215516   97516 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0821 10:53:44.215520   97516 command_runner.go:130] > # changing them here.
	I0821 10:53:44.215527   97516 command_runner.go:130] > # insecure_registries = [
	I0821 10:53:44.215530   97516 command_runner.go:130] > # ]
	I0821 10:53:44.215536   97516 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0821 10:53:44.215544   97516 command_runner.go:130] > # ignore; the last of these ignores volumes entirely.
	I0821 10:53:44.215548   97516 command_runner.go:130] > # image_volumes = "mkdir"
	I0821 10:53:44.215553   97516 command_runner.go:130] > # Temporary directory to use for storing big files
	I0821 10:53:44.215558   97516 command_runner.go:130] > # big_files_temporary_dir = ""
	I0821 10:53:44.215564   97516 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0821 10:53:44.215570   97516 command_runner.go:130] > # CNI plugins.
	I0821 10:53:44.215574   97516 command_runner.go:130] > [crio.network]
	I0821 10:53:44.215582   97516 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0821 10:53:44.215587   97516 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0821 10:53:44.215594   97516 command_runner.go:130] > # cni_default_network = ""
	I0821 10:53:44.215599   97516 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0821 10:53:44.215606   97516 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0821 10:53:44.215611   97516 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0821 10:53:44.215618   97516 command_runner.go:130] > # plugin_dirs = [
	I0821 10:53:44.215622   97516 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0821 10:53:44.215627   97516 command_runner.go:130] > # ]
	I0821 10:53:44.215645   97516 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I0821 10:53:44.215652   97516 command_runner.go:130] > [crio.metrics]
	I0821 10:53:44.215656   97516 command_runner.go:130] > # Globally enable or disable metrics support.
	I0821 10:53:44.215661   97516 command_runner.go:130] > # enable_metrics = false
	I0821 10:53:44.215668   97516 command_runner.go:130] > # Specify enabled metrics collectors.
	I0821 10:53:44.215672   97516 command_runner.go:130] > # Per default all metrics are enabled.
	I0821 10:53:44.215680   97516 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0821 10:53:44.215686   97516 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0821 10:53:44.215694   97516 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0821 10:53:44.215698   97516 command_runner.go:130] > # metrics_collectors = [
	I0821 10:53:44.215704   97516 command_runner.go:130] > # 	"operations",
	I0821 10:53:44.215708   97516 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0821 10:53:44.215713   97516 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0821 10:53:44.215720   97516 command_runner.go:130] > # 	"operations_errors",
	I0821 10:53:44.215724   97516 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0821 10:53:44.215731   97516 command_runner.go:130] > # 	"image_pulls_by_name",
	I0821 10:53:44.215736   97516 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0821 10:53:44.215742   97516 command_runner.go:130] > # 	"image_pulls_failures",
	I0821 10:53:44.215746   97516 command_runner.go:130] > # 	"image_pulls_successes",
	I0821 10:53:44.215753   97516 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0821 10:53:44.215757   97516 command_runner.go:130] > # 	"image_layer_reuse",
	I0821 10:53:44.215761   97516 command_runner.go:130] > # 	"containers_oom_total",
	I0821 10:53:44.215765   97516 command_runner.go:130] > # 	"containers_oom",
	I0821 10:53:44.215769   97516 command_runner.go:130] > # 	"processes_defunct",
	I0821 10:53:44.215773   97516 command_runner.go:130] > # 	"operations_total",
	I0821 10:53:44.215777   97516 command_runner.go:130] > # 	"operations_latency_seconds",
	I0821 10:53:44.215784   97516 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0821 10:53:44.215788   97516 command_runner.go:130] > # 	"operations_errors_total",
	I0821 10:53:44.215792   97516 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0821 10:53:44.215799   97516 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0821 10:53:44.215803   97516 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0821 10:53:44.215808   97516 command_runner.go:130] > # 	"image_pulls_success_total",
	I0821 10:53:44.215814   97516 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0821 10:53:44.215818   97516 command_runner.go:130] > # 	"containers_oom_count_total",
	I0821 10:53:44.215824   97516 command_runner.go:130] > # ]
	I0821 10:53:44.215828   97516 command_runner.go:130] > # The port on which the metrics server will listen.
	I0821 10:53:44.215836   97516 command_runner.go:130] > # metrics_port = 9090
	I0821 10:53:44.215841   97516 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0821 10:53:44.215847   97516 command_runner.go:130] > # metrics_socket = ""
	I0821 10:53:44.215852   97516 command_runner.go:130] > # The certificate for the secure metrics server.
	I0821 10:53:44.215858   97516 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0821 10:53:44.215864   97516 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0821 10:53:44.215871   97516 command_runner.go:130] > # certificate on any modification event.
	I0821 10:53:44.215875   97516 command_runner.go:130] > # metrics_cert = ""
	I0821 10:53:44.215880   97516 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0821 10:53:44.215887   97516 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0821 10:53:44.215891   97516 command_runner.go:130] > # metrics_key = ""
	I0821 10:53:44.215899   97516 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0821 10:53:44.215903   97516 command_runner.go:130] > [crio.tracing]
	I0821 10:53:44.215910   97516 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0821 10:53:44.215914   97516 command_runner.go:130] > # enable_tracing = false
	I0821 10:53:44.215922   97516 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0821 10:53:44.215926   97516 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0821 10:53:44.215934   97516 command_runner.go:130] > # Number of samples to collect per million spans.
	I0821 10:53:44.215938   97516 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0821 10:53:44.215946   97516 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0821 10:53:44.215950   97516 command_runner.go:130] > [crio.stats]
	I0821 10:53:44.215956   97516 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0821 10:53:44.215962   97516 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0821 10:53:44.215967   97516 command_runner.go:130] > # stats_collection_period = 0
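
Several options above are marked "supports live configuration reload". A sketch of exercising that, assuming the stock crio systemd unit (whose reload action delivers the SIGHUP that triggers the re-read) and the crio config subcommand for dumping the rendered configuration:

	sudo systemctl reload crio                  # SIGHUP: re-read reloadable options
	sudo crio config > /tmp/crio.rendered.conf  # commented config as CRI-O renders it
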
	I0821 10:53:44.216000   97516 command_runner.go:130] ! time="2023-08-21 10:53:44.209322131Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0821 10:53:44.216013   97516 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0821 10:53:44.216065   97516 cni.go:84] Creating CNI manager for ""
	I0821 10:53:44.216073   97516 cni.go:136] 2 nodes found, recommending kindnet
	I0821 10:53:44.216088   97516 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0821 10:53:44.216108   97516 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.27.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-200985 NodeName:multinode-200985-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0821 10:53:44.216212   97516 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-200985-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
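
kubeadm persists the ClusterConfiguration part of the file above in the kubeadm-config ConfigMap; the join step below reads it back ("Reading configuration from the cluster..."), and it can be inspected with the very command that output suggests:

	kubectl --context multinode-200985 -n kube-system get cm kubeadm-config -o yaml
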
	
	I0821 10:53:44.216262   97516 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-200985-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.4 ClusterName:multinode-200985 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0821 10:53:44.216305   97516 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.4
	I0821 10:53:44.223453   97516 command_runner.go:130] > kubeadm
	I0821 10:53:44.223470   97516 command_runner.go:130] > kubectl
	I0821 10:53:44.223477   97516 command_runner.go:130] > kubelet
	I0821 10:53:44.224076   97516 binaries.go:44] Found k8s binaries, skipping transfer
	I0821 10:53:44.224123   97516 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0821 10:53:44.231735   97516 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0821 10:53:44.246521   97516 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0821 10:53:44.262080   97516 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0821 10:53:44.265259   97516 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
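
The one-liner above is an idempotent replace-or-add: it filters out any stale control-plane.minikube.internal entry, appends the current mapping, and copies the result back over /etc/hosts. The same idiom in isolation, with illustrative IP and hostname:

	# Replace-or-add a tab-separated hosts entry without duplicating it.
	{ grep -v $'\texample.internal$' /etc/hosts; \
	  echo "10.0.0.5	example.internal"; } > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$
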
	I0821 10:53:44.274651   97516 host.go:66] Checking if "multinode-200985" exists ...
	I0821 10:53:44.274893   97516 config.go:182] Loaded profile config "multinode-200985": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0821 10:53:44.274887   97516 start.go:301] JoinCluster: &{Name:multinode-200985 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:multinode-200985 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain
:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker
MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 10:53:44.274995   97516 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0821 10:53:44.275043   97516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-200985
	I0821 10:53:44.291272   97516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/multinode-200985/id_rsa Username:docker}
	I0821 10:53:44.433395   97516 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 41pf21.unso0eaaaen60rd1 --discovery-token-ca-cert-hash sha256:a6ae141b3a3795878aa14999e04688399a9a305fa66151b732d0ee2f32cf9691 
	I0821 10:53:44.433456   97516 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0821 10:53:44.433490   97516 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 41pf21.unso0eaaaen60rd1 --discovery-token-ca-cert-hash sha256:a6ae141b3a3795878aa14999e04688399a9a305fa66151b732d0ee2f32cf9691 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-200985-m02"
	I0821 10:53:44.465423   97516 command_runner.go:130] > [preflight] Running pre-flight checks
	I0821 10:53:44.492303   97516 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0821 10:53:44.492322   97516 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1039-gcp
	I0821 10:53:44.492327   97516 command_runner.go:130] > OS: Linux
	I0821 10:53:44.492333   97516 command_runner.go:130] > CGROUPS_CPU: enabled
	I0821 10:53:44.492339   97516 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0821 10:53:44.492343   97516 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0821 10:53:44.492352   97516 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0821 10:53:44.492360   97516 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0821 10:53:44.492371   97516 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0821 10:53:44.492381   97516 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0821 10:53:44.492392   97516 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0821 10:53:44.492405   97516 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0821 10:53:44.565755   97516 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0821 10:53:44.565787   97516 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0821 10:53:44.588239   97516 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0821 10:53:44.588270   97516 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0821 10:53:44.588281   97516 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0821 10:53:44.667515   97516 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0821 10:53:46.680838   97516 command_runner.go:130] > This node has joined the cluster:
	I0821 10:53:46.680859   97516 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0821 10:53:46.680866   97516 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0821 10:53:46.680872   97516 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0821 10:53:46.684107   97516 command_runner.go:130] ! W0821 10:53:44.464996    1108 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0821 10:53:46.684145   97516 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1039-gcp\n", err: exit status 1
	I0821 10:53:46.684160   97516 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0821 10:53:46.684187   97516 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 41pf21.unso0eaaaen60rd1 --discovery-token-ca-cert-hash sha256:a6ae141b3a3795878aa14999e04688399a9a305fa66151b732d0ee2f32cf9691 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-200985-m02": (2.250680441s)
	I0821 10:53:46.684212   97516 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0821 10:53:46.832014   97516 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0821 10:53:46.832044   97516 start.go:303] JoinCluster complete in 2.55715715s
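
As the join output above suggests, membership can be confirmed from the control plane; the new node typically shows NotReady until the CNI daemonset (kindnet here) is running on it:

	kubectl --context multinode-200985 get nodes
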
	I0821 10:53:46.832056   97516 cni.go:84] Creating CNI manager for ""
	I0821 10:53:46.832062   97516 cni.go:136] 2 nodes found, recommending kindnet
	I0821 10:53:46.832111   97516 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0821 10:53:46.835557   97516 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0821 10:53:46.835586   97516 command_runner.go:130] >   Size: 3955775   	Blocks: 7728       IO Block: 4096   regular file
	I0821 10:53:46.835596   97516 command_runner.go:130] > Device: 34h/52d	Inode: 543804      Links: 1
	I0821 10:53:46.835607   97516 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0821 10:53:46.835617   97516 command_runner.go:130] > Access: 2023-05-09 19:53:47.000000000 +0000
	I0821 10:53:46.835629   97516 command_runner.go:130] > Modify: 2023-05-09 19:53:47.000000000 +0000
	I0821 10:53:46.835638   97516 command_runner.go:130] > Change: 2023-08-21 10:33:51.094069544 +0000
	I0821 10:53:46.835649   97516 command_runner.go:130] >  Birth: 2023-08-21 10:33:51.070067242 +0000
	I0821 10:53:46.835709   97516 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.27.4/kubectl ...
	I0821 10:53:46.835723   97516 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0821 10:53:46.851551   97516 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0821 10:53:47.089051   97516 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0821 10:53:47.092511   97516 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0821 10:53:47.094718   97516 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0821 10:53:47.105617   97516 command_runner.go:130] > daemonset.apps/kindnet configured
	I0821 10:53:47.110046   97516 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17102-5717/kubeconfig
	I0821 10:53:47.110411   97516 kapi.go:59] client config for multinode-200985: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17102-5717/.minikube/profiles/multinode-200985/client.crt", KeyFile:"/home/jenkins/minikube-integration/17102-5717/.minikube/profiles/multinode-200985/client.key", CAFile:"/home/jenkins/minikube-integration/17102-5717/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d61e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0821 10:53:47.110790   97516 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0821 10:53:47.110805   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:47.110812   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:47.110821   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:47.112798   97516 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0821 10:53:47.112817   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:47.112825   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:47.112831   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:47.112836   97516 round_trippers.go:580]     Content-Length: 291
	I0821 10:53:47.112842   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:47 GMT
	I0821 10:53:47.112850   97516 round_trippers.go:580]     Audit-Id: 8b69d50f-724c-4439-ae19-21018bfd9697
	I0821 10:53:47.112856   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:47.112868   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:47.112898   97516 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a051514f-f439-40a8-b010-41566895539b","resourceVersion":"411","creationTimestamp":"2023-08-21T10:52:47Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0821 10:53:47.112997   97516 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-200985" context rescaled to 1 replicas
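	The rescale above is a read-modify-write against the autoscaling/v1 Scale subresource of the coredns deployment, visible above as the GET of .../deployments/coredns/scale; here the deployment was already at one replica, so no write-back was needed. A minimal client-go sketch of the same operation, assuming an already-configured kubernetes.Interface (the function name is illustrative, not minikube's kapi.go):

	package scale

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// rescaleCoreDNS reads the Scale subresource of kube-system/coredns and,
	// if the deployment is not already at one replica, writes the new size back.
	func rescaleCoreDNS(ctx context.Context, cs kubernetes.Interface) error {
		deployments := cs.AppsV1().Deployments("kube-system")
		scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if scale.Spec.Replicas == 1 {
			return nil // already at the desired size; nothing to write back
		}
		scale.Spec.Replicas = 1
		_, err = deployments.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
		return err
	}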
	I0821 10:53:47.113026   97516 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0821 10:53:47.115789   97516 out.go:177] * Verifying Kubernetes components...
	I0821 10:53:47.117136   97516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0821 10:53:47.127734   97516 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17102-5717/kubeconfig
	I0821 10:53:47.128026   97516 kapi.go:59] client config for multinode-200985: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17102-5717/.minikube/profiles/multinode-200985/client.crt", KeyFile:"/home/jenkins/minikube-integration/17102-5717/.minikube/profiles/multinode-200985/client.key", CAFile:"/home/jenkins/minikube-integration/17102-5717/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d61e00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0821 10:53:47.128372   97516 node_ready.go:35] waiting up to 6m0s for node "multinode-200985-m02" to be "Ready" ...
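	From here node_ready.go polls GET /api/v1/nodes/multinode-200985-m02 roughly every 500ms (the repeating request/response cycles below), checking the Node's Ready condition until it turns True or the 6m0s budget runs out. A hedged sketch of such a wait loop with client-go and apimachinery's wait helper (waitNodeReady is a hypothetical name, not minikube's implementation):

	package nodewait

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitNodeReady polls the Node object until its Ready condition reports
	// True, re-fetching on a fixed interval up to the given timeout.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "not ready yet"
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}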
	I0821 10:53:47.128460   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985-m02
	I0821 10:53:47.128471   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:47.128482   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:47.128495   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:47.130630   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:47.130649   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:47.130659   97516 round_trippers.go:580]     Audit-Id: a6b2d1bd-d60a-41e3-834e-01b46da255da
	I0821 10:53:47.130667   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:47.130678   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:47.130691   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:47.130699   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:47.130707   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:47 GMT
	I0821 10:53:47.130814   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985-m02","uid":"fc4ff00a-a1ee-4e2a-a3f5-a459ad19cc6f","resourceVersion":"446","creationTimestamp":"2023-08-21T10:53:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I0821 10:53:47.131134   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985-m02
	I0821 10:53:47.131148   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:47.131155   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:47.131161   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:47.133035   97516 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0821 10:53:47.133053   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:47.133063   97516 round_trippers.go:580]     Audit-Id: 7d46ab05-dcee-40be-8e59-01bc5c55de84
	I0821 10:53:47.133070   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:47.133079   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:47.133088   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:47.133103   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:47.133111   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:47 GMT
	I0821 10:53:47.133230   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985-m02","uid":"fc4ff00a-a1ee-4e2a-a3f5-a459ad19cc6f","resourceVersion":"446","creationTimestamp":"2023-08-21T10:53:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I0821 10:53:47.634367   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985-m02
	I0821 10:53:47.634388   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:47.634399   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:47.634408   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:47.636654   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:47.636682   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:47.636698   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:47.636708   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:47.636721   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:47 GMT
	I0821 10:53:47.636730   97516 round_trippers.go:580]     Audit-Id: 94560ca4-a0ca-4b66-b818-d7bd42dddf86
	I0821 10:53:47.636742   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:47.636754   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:47.636909   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985-m02","uid":"fc4ff00a-a1ee-4e2a-a3f5-a459ad19cc6f","resourceVersion":"446","creationTimestamp":"2023-08-21T10:53:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I0821 10:53:48.134464   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985-m02
	I0821 10:53:48.134485   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:48.134494   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:48.134500   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:48.136788   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:48.136807   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:48.136814   97516 round_trippers.go:580]     Audit-Id: ed40bde0-440a-43ea-855a-7b128027fc12
	I0821 10:53:48.136820   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:48.136825   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:48.136831   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:48.136837   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:48.136842   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:48 GMT
	I0821 10:53:48.136953   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985-m02","uid":"fc4ff00a-a1ee-4e2a-a3f5-a459ad19cc6f","resourceVersion":"446","creationTimestamp":"2023-08-21T10:53:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I0821 10:53:48.634750   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985-m02
	I0821 10:53:48.634770   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:48.634779   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:48.634785   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:48.637057   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:48.637083   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:48.637094   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:48.637103   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:48.637112   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:48 GMT
	I0821 10:53:48.637125   97516 round_trippers.go:580]     Audit-Id: 36a1d6dc-2bcb-4232-a778-6bd5e1e8351f
	I0821 10:53:48.637136   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:48.637144   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:48.637236   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985-m02","uid":"fc4ff00a-a1ee-4e2a-a3f5-a459ad19cc6f","resourceVersion":"446","creationTimestamp":"2023-08-21T10:53:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I0821 10:53:49.133792   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985-m02
	I0821 10:53:49.133814   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:49.133822   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:49.133834   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:49.136141   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:49.136167   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:49.136178   97516 round_trippers.go:580]     Audit-Id: 987621b4-bcbd-4e75-8610-4cbb456e3534
	I0821 10:53:49.136188   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:49.136197   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:49.136207   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:49.136218   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:49.136225   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:49 GMT
	I0821 10:53:49.136356   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985-m02","uid":"fc4ff00a-a1ee-4e2a-a3f5-a459ad19cc6f","resourceVersion":"446","creationTimestamp":"2023-08-21T10:53:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I0821 10:53:49.136620   97516 node_ready.go:58] node "multinode-200985-m02" has status "Ready":"False"
	I0821 10:53:49.633823   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985-m02
	I0821 10:53:49.633845   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:49.633853   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:49.633860   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:49.636079   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:49.636098   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:49.636105   97516 round_trippers.go:580]     Audit-Id: 7762a61e-afe5-4dcf-81a9-2d96d4b6786a
	I0821 10:53:49.636111   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:49.636116   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:49.636121   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:49.636126   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:49.636132   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:49 GMT
	I0821 10:53:49.636245   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985-m02","uid":"fc4ff00a-a1ee-4e2a-a3f5-a459ad19cc6f","resourceVersion":"446","creationTimestamp":"2023-08-21T10:53:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I0821 10:53:50.133863   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985-m02
	I0821 10:53:50.133884   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:50.133892   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:50.133898   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:50.136179   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:50.136198   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:50.136208   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:50.136217   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:50.136227   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:50.136237   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:50 GMT
	I0821 10:53:50.136249   97516 round_trippers.go:580]     Audit-Id: aec65263-d394-4464-81cb-1e1c3e339e89
	I0821 10:53:50.136261   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:50.136400   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985-m02","uid":"fc4ff00a-a1ee-4e2a-a3f5-a459ad19cc6f","resourceVersion":"446","creationTimestamp":"2023-08-21T10:53:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I0821 10:53:50.633966   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985-m02
	I0821 10:53:50.634026   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:50.634053   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:50.634064   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:50.636749   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:50.636774   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:50.636785   97516 round_trippers.go:580]     Audit-Id: 9a4af398-6455-4603-8440-06c7647bc769
	I0821 10:53:50.636796   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:50.636808   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:50.636821   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:50.636834   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:50.636847   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:50 GMT
	I0821 10:53:50.636957   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985-m02","uid":"fc4ff00a-a1ee-4e2a-a3f5-a459ad19cc6f","resourceVersion":"464","creationTimestamp":"2023-08-21T10:53:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0821 10:53:51.134589   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985-m02
	I0821 10:53:51.134612   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:51.134624   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:51.134632   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:51.136997   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:51.137020   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:51.137027   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:51 GMT
	I0821 10:53:51.137034   97516 round_trippers.go:580]     Audit-Id: c42232fb-5452-447a-add8-439763b3b8b4
	I0821 10:53:51.137040   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:51.137046   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:51.137051   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:51.137062   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:51.137191   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985-m02","uid":"fc4ff00a-a1ee-4e2a-a3f5-a459ad19cc6f","resourceVersion":"464","creationTimestamp":"2023-08-21T10:53:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0821 10:53:51.137573   97516 node_ready.go:58] node "multinode-200985-m02" has status "Ready":"False"
	I0821 10:53:51.633960   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985-m02
	I0821 10:53:51.633983   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:51.633991   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:51.633997   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:51.636325   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:51.636353   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:51.636367   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:51.636377   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:51.636386   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:51.636396   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:51 GMT
	I0821 10:53:51.636408   97516 round_trippers.go:580]     Audit-Id: ee2f4a48-002a-44f5-828a-34c49b90f14f
	I0821 10:53:51.636421   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:51.636544   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985-m02","uid":"fc4ff00a-a1ee-4e2a-a3f5-a459ad19cc6f","resourceVersion":"464","creationTimestamp":"2023-08-21T10:53:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0821 10:53:52.134059   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985-m02
	I0821 10:53:52.134080   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:52.134090   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:52.134098   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:52.136457   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:52.136476   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:52.136483   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:52.136490   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:52.136498   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:52.136507   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:52.136517   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:52 GMT
	I0821 10:53:52.136527   97516 round_trippers.go:580]     Audit-Id: b3209e5f-2f96-4c35-99cf-7085f8f6df5f
	I0821 10:53:52.136651   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985-m02","uid":"fc4ff00a-a1ee-4e2a-a3f5-a459ad19cc6f","resourceVersion":"464","creationTimestamp":"2023-08-21T10:53:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0821 10:53:52.634139   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985-m02
	I0821 10:53:52.634163   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:52.634174   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:52.634182   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:52.636439   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:52.636457   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:52.636463   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:52.636470   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:52 GMT
	I0821 10:53:52.636476   97516 round_trippers.go:580]     Audit-Id: 4ed608a9-6b30-430b-be70-82a69aa66237
	I0821 10:53:52.636484   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:52.636492   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:52.636500   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:52.636646   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985-m02","uid":"fc4ff00a-a1ee-4e2a-a3f5-a459ad19cc6f","resourceVersion":"464","creationTimestamp":"2023-08-21T10:53:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0821 10:53:53.134229   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985-m02
	I0821 10:53:53.134252   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:53.134260   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:53.134267   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:53.136318   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:53.136343   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:53.136353   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:53.136361   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:53.136371   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:53 GMT
	I0821 10:53:53.136381   97516 round_trippers.go:580]     Audit-Id: dd31282a-e97b-4ad4-a61a-225dff5ba6e2
	I0821 10:53:53.136390   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:53.136404   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:53.136516   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985-m02","uid":"fc4ff00a-a1ee-4e2a-a3f5-a459ad19cc6f","resourceVersion":"464","creationTimestamp":"2023-08-21T10:53:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0821 10:53:53.634244   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985-m02
	I0821 10:53:53.634269   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:53.634277   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:53.634283   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:53.636668   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:53.636691   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:53.636698   97516 round_trippers.go:580]     Audit-Id: 1824c41f-162e-4eec-b338-18c2848858fe
	I0821 10:53:53.636708   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:53.636716   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:53.636725   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:53.636733   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:53.636748   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:53 GMT
	I0821 10:53:53.636867   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985-m02","uid":"fc4ff00a-a1ee-4e2a-a3f5-a459ad19cc6f","resourceVersion":"464","creationTimestamp":"2023-08-21T10:53:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0821 10:53:53.637187   97516 node_ready.go:58] node "multinode-200985-m02" has status "Ready":"False"
	I0821 10:53:54.134529   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985-m02
	I0821 10:53:54.134556   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:54.134568   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:54.134577   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:54.136713   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:54.136732   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:54.136739   97516 round_trippers.go:580]     Audit-Id: de1f16bd-3e47-4dc9-baca-e0a6412c3c78
	I0821 10:53:54.136745   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:54.136754   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:54.136762   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:54.136773   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:54.136785   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:54 GMT
	I0821 10:53:54.136903   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985-m02","uid":"fc4ff00a-a1ee-4e2a-a3f5-a459ad19cc6f","resourceVersion":"464","creationTimestamp":"2023-08-21T10:53:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0821 10:53:54.634483   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985-m02
	I0821 10:53:54.634502   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:54.634510   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:54.634516   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:54.636944   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:54.636964   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:54.636971   97516 round_trippers.go:580]     Audit-Id: 92ed5568-b6fe-446f-856c-558977ac4327
	I0821 10:53:54.636979   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:54.636985   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:54.636992   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:54.637001   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:54.637011   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:54 GMT
	I0821 10:53:54.637100   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985-m02","uid":"fc4ff00a-a1ee-4e2a-a3f5-a459ad19cc6f","resourceVersion":"464","creationTimestamp":"2023-08-21T10:53:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0821 10:53:55.134695   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985-m02
	I0821 10:53:55.134716   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:55.134724   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:55.134731   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:55.137000   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:55.137022   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:55.137033   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:55.137044   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:55.137053   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:55 GMT
	I0821 10:53:55.137066   97516 round_trippers.go:580]     Audit-Id: 336fc4c6-d8b1-406e-8234-62de6d62ddb9
	I0821 10:53:55.137075   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:55.137084   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:55.137312   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985-m02","uid":"fc4ff00a-a1ee-4e2a-a3f5-a459ad19cc6f","resourceVersion":"464","creationTimestamp":"2023-08-21T10:53:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0821 10:53:55.634746   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985-m02
	I0821 10:53:55.634780   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:55.634789   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:55.634795   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:55.637115   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:55.637137   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:55.637147   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:55.637153   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:55 GMT
	I0821 10:53:55.637158   97516 round_trippers.go:580]     Audit-Id: 5f646084-b229-4858-ac1c-2c4203045879
	I0821 10:53:55.637164   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:55.637169   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:55.637177   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:55.637262   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985-m02","uid":"fc4ff00a-a1ee-4e2a-a3f5-a459ad19cc6f","resourceVersion":"464","creationTimestamp":"2023-08-21T10:53:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0821 10:53:55.637533   97516 node_ready.go:58] node "multinode-200985-m02" has status "Ready":"False"
	I0821 10:53:56.133743   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985-m02
	I0821 10:53:56.133762   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:56.133770   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:56.133783   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:56.136349   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:56.136372   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:56.136388   97516 round_trippers.go:580]     Audit-Id: cb3fc7ea-3ce5-47e3-aeb0-a031acc18b42
	I0821 10:53:56.136397   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:56.136404   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:56.136410   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:56.136418   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:56.136423   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:56 GMT
	I0821 10:53:56.136512   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985-m02","uid":"fc4ff00a-a1ee-4e2a-a3f5-a459ad19cc6f","resourceVersion":"464","creationTimestamp":"2023-08-21T10:53:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0821 10:53:56.634715   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985-m02
	I0821 10:53:56.634739   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:56.634747   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:56.634756   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:56.637043   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:56.637068   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:56.637077   97516 round_trippers.go:580]     Audit-Id: 9b2b903d-a28d-4a4a-b448-4e89ab63d4a3
	I0821 10:53:56.637085   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:56.637093   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:56.637101   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:56.637110   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:56.637122   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:56 GMT
	I0821 10:53:56.637231   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985-m02","uid":"fc4ff00a-a1ee-4e2a-a3f5-a459ad19cc6f","resourceVersion":"464","creationTimestamp":"2023-08-21T10:53:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0821 10:53:57.133764   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985-m02
	I0821 10:53:57.133786   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:57.133794   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:57.133805   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:57.135944   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:57.135969   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:57.135979   97516 round_trippers.go:580]     Audit-Id: 8a283e7a-c93d-450e-8bac-a3918d11e6dc
	I0821 10:53:57.135989   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:57.135998   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:57.136010   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:57.136019   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:57.136024   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:57 GMT
	I0821 10:53:57.136149   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985-m02","uid":"fc4ff00a-a1ee-4e2a-a3f5-a459ad19cc6f","resourceVersion":"471","creationTimestamp":"2023-08-21T10:53:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0821 10:53:57.633750   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985-m02
	I0821 10:53:57.633769   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:57.633777   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:57.633783   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:57.636028   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:57.636051   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:57.636065   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:57.636072   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:57.636078   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:57.636087   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:57 GMT
	I0821 10:53:57.636093   97516 round_trippers.go:580]     Audit-Id: a255e856-fde6-4e25-85cf-f4d9504631e0
	I0821 10:53:57.636099   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:57.636178   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985-m02","uid":"fc4ff00a-a1ee-4e2a-a3f5-a459ad19cc6f","resourceVersion":"471","creationTimestamp":"2023-08-21T10:53:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0821 10:53:58.134760   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985-m02
	I0821 10:53:58.134778   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:58.134786   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:58.134792   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:58.137061   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:58.137078   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:58.137085   97516 round_trippers.go:580]     Audit-Id: 995ee311-973c-4949-ae8b-89cc077f2bc4
	I0821 10:53:58.137092   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:58.137101   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:58.137112   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:58.137124   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:58.137133   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:58 GMT
	I0821 10:53:58.137237   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985-m02","uid":"fc4ff00a-a1ee-4e2a-a3f5-a459ad19cc6f","resourceVersion":"471","creationTimestamp":"2023-08-21T10:53:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0821 10:53:58.137557   97516 node_ready.go:58] node "multinode-200985-m02" has status "Ready":"False"
	I0821 10:53:58.634463   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985-m02
	I0821 10:53:58.634484   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:58.634492   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:58.634499   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:58.636755   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:58.636779   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:58.636790   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:58.636798   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:58.636806   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:58.636819   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:58 GMT
	I0821 10:53:58.636830   97516 round_trippers.go:580]     Audit-Id: 0db67a40-9279-43d9-83cf-3cf7cebe1840
	I0821 10:53:58.636842   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:58.636939   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985-m02","uid":"fc4ff00a-a1ee-4e2a-a3f5-a459ad19cc6f","resourceVersion":"471","creationTimestamp":"2023-08-21T10:53:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0821 10:53:59.134588   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985-m02
	I0821 10:53:59.134607   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:59.134619   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:59.134628   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:59.137190   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:59.137216   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:59.137227   97516 round_trippers.go:580]     Audit-Id: a0053f6c-9bbb-4bc2-9df2-24fdd8fd047d
	I0821 10:53:59.137237   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:59.137245   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:59.137254   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:59.137267   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:59.137279   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:59 GMT
	I0821 10:53:59.137386   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985-m02","uid":"fc4ff00a-a1ee-4e2a-a3f5-a459ad19cc6f","resourceVersion":"471","creationTimestamp":"2023-08-21T10:53:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0821 10:53:59.633949   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985-m02
	I0821 10:53:59.633969   97516 round_trippers.go:469] Request Headers:
	I0821 10:53:59.633978   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:53:59.633984   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:53:59.636160   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:53:59.636182   97516 round_trippers.go:577] Response Headers:
	I0821 10:53:59.636190   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:53:59.636196   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:53:59.636202   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:53:59 GMT
	I0821 10:53:59.636207   97516 round_trippers.go:580]     Audit-Id: 0c3843fd-74e3-42fe-8290-37bef5e48c14
	I0821 10:53:59.636212   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:53:59.636218   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:53:59.636311   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985-m02","uid":"fc4ff00a-a1ee-4e2a-a3f5-a459ad19cc6f","resourceVersion":"471","creationTimestamp":"2023-08-21T10:53:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0821 10:54:00.133921   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985-m02
	I0821 10:54:00.133942   97516 round_trippers.go:469] Request Headers:
	I0821 10:54:00.133950   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:54:00.133956   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:54:00.136181   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:54:00.136213   97516 round_trippers.go:577] Response Headers:
	I0821 10:54:00.136224   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:54:00 GMT
	I0821 10:54:00.136234   97516 round_trippers.go:580]     Audit-Id: 25961da1-719d-44cb-8be7-27ff19a24478
	I0821 10:54:00.136243   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:54:00.136256   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:54:00.136266   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:54:00.136275   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:54:00.136406   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985-m02","uid":"fc4ff00a-a1ee-4e2a-a3f5-a459ad19cc6f","resourceVersion":"471","creationTimestamp":"2023-08-21T10:53:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0821 10:54:00.633946   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985-m02
	I0821 10:54:00.633969   97516 round_trippers.go:469] Request Headers:
	I0821 10:54:00.633981   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:54:00.633989   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:54:00.636282   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:54:00.636305   97516 round_trippers.go:577] Response Headers:
	I0821 10:54:00.636315   97516 round_trippers.go:580]     Audit-Id: f4861189-0136-4cf5-b9b3-d93b2688da92
	I0821 10:54:00.636321   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:54:00.636327   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:54:00.636335   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:54:00.636343   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:54:00.636352   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:54:00 GMT
	I0821 10:54:00.636476   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985-m02","uid":"fc4ff00a-a1ee-4e2a-a3f5-a459ad19cc6f","resourceVersion":"471","creationTimestamp":"2023-08-21T10:53:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0821 10:54:00.636798   97516 node_ready.go:58] node "multinode-200985-m02" has status "Ready":"False"
	I0821 10:54:01.133941   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985-m02
	I0821 10:54:01.133963   97516 round_trippers.go:469] Request Headers:
	I0821 10:54:01.133971   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:54:01.133977   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:54:01.136409   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:54:01.136425   97516 round_trippers.go:577] Response Headers:
	I0821 10:54:01.136432   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:54:01.136438   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:54:01.136446   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:54:01.136453   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:54:01 GMT
	I0821 10:54:01.136463   97516 round_trippers.go:580]     Audit-Id: 29d284d7-d1f8-45ac-b85d-9697e041e1df
	I0821 10:54:01.136474   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:54:01.136586   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985-m02","uid":"fc4ff00a-a1ee-4e2a-a3f5-a459ad19cc6f","resourceVersion":"471","creationTimestamp":"2023-08-21T10:53:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0821 10:54:01.634482   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985-m02
	I0821 10:54:01.634504   97516 round_trippers.go:469] Request Headers:
	I0821 10:54:01.634512   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:54:01.634518   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:54:01.636827   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:54:01.636847   97516 round_trippers.go:577] Response Headers:
	I0821 10:54:01.636854   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:54:01.636860   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:54:01.636866   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:54:01 GMT
	I0821 10:54:01.636872   97516 round_trippers.go:580]     Audit-Id: 84791415-3f3e-4505-a924-3db368c5e09e
	I0821 10:54:01.636877   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:54:01.636883   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:54:01.636983   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985-m02","uid":"fc4ff00a-a1ee-4e2a-a3f5-a459ad19cc6f","resourceVersion":"471","creationTimestamp":"2023-08-21T10:53:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0821 10:54:02.134543   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985-m02
	I0821 10:54:02.134563   97516 round_trippers.go:469] Request Headers:
	I0821 10:54:02.134571   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:54:02.134576   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:54:02.136770   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:54:02.136787   97516 round_trippers.go:577] Response Headers:
	I0821 10:54:02.136794   97516 round_trippers.go:580]     Audit-Id: d596a3cc-ec1e-46cc-a63c-f77b2a25f83e
	I0821 10:54:02.136800   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:54:02.136805   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:54:02.136811   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:54:02.136820   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:54:02.136829   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:54:02 GMT
	I0821 10:54:02.136953   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985-m02","uid":"fc4ff00a-a1ee-4e2a-a3f5-a459ad19cc6f","resourceVersion":"471","creationTimestamp":"2023-08-21T10:53:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0821 10:54:02.634625   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985-m02
	I0821 10:54:02.634645   97516 round_trippers.go:469] Request Headers:
	I0821 10:54:02.634653   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:54:02.634660   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:54:02.636977   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:54:02.636995   97516 round_trippers.go:577] Response Headers:
	I0821 10:54:02.637002   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:54:02 GMT
	I0821 10:54:02.637008   97516 round_trippers.go:580]     Audit-Id: 9e7bef53-2eb3-43ad-b956-f621c25fa489
	I0821 10:54:02.637013   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:54:02.637019   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:54:02.637026   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:54:02.637035   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:54:02.637132   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985-m02","uid":"fc4ff00a-a1ee-4e2a-a3f5-a459ad19cc6f","resourceVersion":"471","creationTimestamp":"2023-08-21T10:53:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0821 10:54:02.637441   97516 node_ready.go:58] node "multinode-200985-m02" has status "Ready":"False"
	I0821 10:54:03.134734   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985-m02
	I0821 10:54:03.134753   97516 round_trippers.go:469] Request Headers:
	I0821 10:54:03.134761   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:54:03.134767   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:54:03.137366   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:54:03.137389   97516 round_trippers.go:577] Response Headers:
	I0821 10:54:03.137400   97516 round_trippers.go:580]     Audit-Id: 636ddab8-6893-4e92-9578-6d1cb3a3fbaf
	I0821 10:54:03.137412   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:54:03.137420   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:54:03.137439   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:54:03.137448   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:54:03.137460   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:54:03 GMT
	I0821 10:54:03.137575   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985-m02","uid":"fc4ff00a-a1ee-4e2a-a3f5-a459ad19cc6f","resourceVersion":"471","creationTimestamp":"2023-08-21T10:53:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0821 10:54:03.634242   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985-m02
	I0821 10:54:03.634264   97516 round_trippers.go:469] Request Headers:
	I0821 10:54:03.634275   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:54:03.634283   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:54:03.636685   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:54:03.636706   97516 round_trippers.go:577] Response Headers:
	I0821 10:54:03.636714   97516 round_trippers.go:580]     Audit-Id: 7b471f0b-6ed7-45c2-b3b0-c3768ecd3e80
	I0821 10:54:03.636719   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:54:03.636727   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:54:03.636735   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:54:03.636746   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:54:03.636756   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:54:03 GMT
	I0821 10:54:03.636873   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985-m02","uid":"fc4ff00a-a1ee-4e2a-a3f5-a459ad19cc6f","resourceVersion":"471","creationTimestamp":"2023-08-21T10:53:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5479 chars]
	I0821 10:54:04.134485   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985-m02
	I0821 10:54:04.134505   97516 round_trippers.go:469] Request Headers:
	I0821 10:54:04.134513   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:54:04.134520   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:54:04.136817   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:54:04.136835   97516 round_trippers.go:577] Response Headers:
	I0821 10:54:04.136842   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:54:04.136848   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:54:04.136853   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:54:04 GMT
	I0821 10:54:04.136858   97516 round_trippers.go:580]     Audit-Id: cb6847fe-0004-420e-bc6c-dd11e49b2054
	I0821 10:54:04.136864   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:54:04.136870   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:54:04.136993   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985-m02","uid":"fc4ff00a-a1ee-4e2a-a3f5-a459ad19cc6f","resourceVersion":"488","creationTimestamp":"2023-08-21T10:53:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5296 chars]
	I0821 10:54:04.137282   97516 node_ready.go:49] node "multinode-200985-m02" has status "Ready":"True"
	I0821 10:54:04.137294   97516 node_ready.go:38] duration metric: took 17.008903093s waiting for node "multinode-200985-m02" to be "Ready" ...
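The half-second spacing of the node GETs above is a plain poll of /api/v1/nodes/multinode-200985-m02 until the node's NodeReady condition reports True. The following is a minimal sketch of that kind of poll, assuming k8s.io/client-go; the helper name waitNodeReady, the 500ms interval, and the error handling are illustrative, not minikube's actual implementation.

package readiness

import (
	"context"
	"errors"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady re-fetches the node on a fixed interval until its
// NodeReady condition turns True or the timeout expires.
func waitNodeReady(ctx context.Context, c kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					return nil // "Ready":"True", as logged at 10:54:04 above
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	return errors.New("timed out waiting for node " + name + " to be Ready")
}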
	I0821 10:54:04.137303   97516 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0821 10:54:04.137353   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0821 10:54:04.137360   97516 round_trippers.go:469] Request Headers:
	I0821 10:54:04.137367   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:54:04.137380   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:54:04.141092   97516 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0821 10:54:04.141116   97516 round_trippers.go:577] Response Headers:
	I0821 10:54:04.141127   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:54:04 GMT
	I0821 10:54:04.141140   97516 round_trippers.go:580]     Audit-Id: cbb9bdb3-14b9-440e-b026-636014829e1a
	I0821 10:54:04.141149   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:54:04.141162   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:54:04.141174   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:54:04.141184   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:54:04.141745   97516 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"490"},"items":[{"metadata":{"name":"coredns-5d78c9869d-p7wfm","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"e31fc5d6-efb4-4659-95e0-45e4b0319116","resourceVersion":"407","creationTimestamp":"2023-08-21T10:53:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"e0b7f7a8-559b-44c2-879c-54813abddce8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e0b7f7a8-559b-44c2-879c-54813abddce8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68974 chars]
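The PodList GET above fetches every kube-system pod in a single request; the system-critical labels listed at 10:54:04.137303 are then matched before the per-pod waits begin. A sketch under the same client-go assumptions; criticalPods is a hypothetical helper, and the client-side matching is inferred from the log rather than confirmed from source.

// criticalPods lists kube-system once and keeps only pods carrying one of
// the label pairs from the wait list above.
func criticalPods(ctx context.Context, c kubernetes.Interface) ([]corev1.Pod, error) {
	wanted := []struct{ key, val string }{
		{"k8s-app", "kube-dns"}, {"component", "etcd"},
		{"component", "kube-apiserver"}, {"component", "kube-controller-manager"},
		{"k8s-app", "kube-proxy"}, {"component", "kube-scheduler"},
	}
	list, err := c.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	var out []corev1.Pod
	for _, p := range list.Items {
		for _, w := range wanted {
			if p.Labels[w.key] == w.val {
				out = append(out, p)
				break
			}
		}
	}
	return out, nil
}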
	I0821 10:54:04.143927   97516 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-p7wfm" in "kube-system" namespace to be "Ready" ...
	I0821 10:54:04.143988   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-p7wfm
	I0821 10:54:04.143995   97516 round_trippers.go:469] Request Headers:
	I0821 10:54:04.144003   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:54:04.144011   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:54:04.145691   97516 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0821 10:54:04.145711   97516 round_trippers.go:577] Response Headers:
	I0821 10:54:04.145720   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:54:04 GMT
	I0821 10:54:04.145726   97516 round_trippers.go:580]     Audit-Id: fe7fdaf4-0e23-492b-9c05-24906feaa5ca
	I0821 10:54:04.145734   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:54:04.145742   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:54:04.145748   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:54:04.145756   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:54:04.145847   97516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-p7wfm","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"e31fc5d6-efb4-4659-95e0-45e4b0319116","resourceVersion":"407","creationTimestamp":"2023-08-21T10:53:01Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"e0b7f7a8-559b-44c2-879c-54813abddce8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e0b7f7a8-559b-44c2-879c-54813abddce8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0821 10:54:04.146336   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:54:04.146351   97516 round_trippers.go:469] Request Headers:
	I0821 10:54:04.146362   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:54:04.146371   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:54:04.148064   97516 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0821 10:54:04.148078   97516 round_trippers.go:577] Response Headers:
	I0821 10:54:04.148084   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:54:04.148090   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:54:04.148097   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:54:04 GMT
	I0821 10:54:04.148106   97516 round_trippers.go:580]     Audit-Id: dbce82d4-7f95-4001-95b4-91a9cb2d1201
	I0821 10:54:04.148118   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:54:04.148131   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:54:04.148249   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"388","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0821 10:54:04.148617   97516 pod_ready.go:92] pod "coredns-5d78c9869d-p7wfm" in "kube-system" namespace has status "Ready":"True"
	I0821 10:54:04.148631   97516 pod_ready.go:81] duration metric: took 4.68519ms waiting for pod "coredns-5d78c9869d-p7wfm" in "kube-system" namespace to be "Ready" ...
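Each pod_ready wait above then reduces to reading the pod's Ready condition (followed, in this log, by a GET of the node the pod runs on). A minimal sketch of that condition check, using the same imports as the node sketch earlier; isPodReady is a hypothetical name.

// isPodReady fetches one pod and reports whether its PodReady condition
// is True, mirroring the per-pod checks in the log.
func isPodReady(ctx context.Context, c kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil // Ready condition not posted yet
}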
	I0821 10:54:04.148641   97516 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-200985" in "kube-system" namespace to be "Ready" ...
	I0821 10:54:04.148701   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-200985
	I0821 10:54:04.148710   97516 round_trippers.go:469] Request Headers:
	I0821 10:54:04.148718   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:54:04.148731   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:54:04.150385   97516 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0821 10:54:04.150398   97516 round_trippers.go:577] Response Headers:
	I0821 10:54:04.150406   97516 round_trippers.go:580]     Audit-Id: 3a3514d4-f741-4b2d-a725-39219b8488ff
	I0821 10:54:04.150415   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:54:04.150427   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:54:04.150440   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:54:04.150447   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:54:04.150455   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:54:04 GMT
	I0821 10:54:04.150557   97516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-200985","namespace":"kube-system","uid":"a157a5a3-4690-4eb4-9efd-f753499e5e11","resourceVersion":"260","creationTimestamp":"2023-08-21T10:52:47Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"9757e19a475fd7a8f263a89eaa2774b0","kubernetes.io/config.mirror":"9757e19a475fd7a8f263a89eaa2774b0","kubernetes.io/config.seen":"2023-08-21T10:52:47.336802644Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0821 10:54:04.151001   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:54:04.151016   97516 round_trippers.go:469] Request Headers:
	I0821 10:54:04.151027   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:54:04.151037   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:54:04.152815   97516 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0821 10:54:04.152829   97516 round_trippers.go:577] Response Headers:
	I0821 10:54:04.152839   97516 round_trippers.go:580]     Audit-Id: 326bef42-fea8-455d-bf78-05ecee131919
	I0821 10:54:04.152848   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:54:04.152857   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:54:04.152866   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:54:04.152875   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:54:04.152886   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:54:04 GMT
	I0821 10:54:04.152991   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"388","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0821 10:54:04.153335   97516 pod_ready.go:92] pod "etcd-multinode-200985" in "kube-system" namespace has status "Ready":"True"
	I0821 10:54:04.153349   97516 pod_ready.go:81] duration metric: took 4.697276ms waiting for pod "etcd-multinode-200985" in "kube-system" namespace to be "Ready" ...
	I0821 10:54:04.153367   97516 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-200985" in "kube-system" namespace to be "Ready" ...
	I0821 10:54:04.153419   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-200985
	I0821 10:54:04.153429   97516 round_trippers.go:469] Request Headers:
	I0821 10:54:04.153436   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:54:04.153445   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:54:04.155254   97516 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0821 10:54:04.155273   97516 round_trippers.go:577] Response Headers:
	I0821 10:54:04.155282   97516 round_trippers.go:580]     Audit-Id: 5654b04b-7de2-4159-9764-eace9cffafe9
	I0821 10:54:04.155290   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:54:04.155306   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:54:04.155320   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:54:04.155329   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:54:04.155341   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:54:04 GMT
	I0821 10:54:04.155475   97516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-200985","namespace":"kube-system","uid":"0a22f07a-55fc-443f-b684-237b16409ed9","resourceVersion":"254","creationTimestamp":"2023-08-21T10:52:47Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"8e3ebedf03aabd7965c175800d660a23","kubernetes.io/config.mirror":"8e3ebedf03aabd7965c175800d660a23","kubernetes.io/config.seen":"2023-08-21T10:52:47.336793815Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0821 10:54:04.155872   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:54:04.155886   97516 round_trippers.go:469] Request Headers:
	I0821 10:54:04.155893   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:54:04.155902   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:54:04.157422   97516 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0821 10:54:04.157436   97516 round_trippers.go:577] Response Headers:
	I0821 10:54:04.157444   97516 round_trippers.go:580]     Audit-Id: 8c8739fb-8eae-49bd-9545-a381dcb92db1
	I0821 10:54:04.157449   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:54:04.157455   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:54:04.157461   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:54:04.157469   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:54:04.157474   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:54:04 GMT
	I0821 10:54:04.157572   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"388","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0821 10:54:04.157828   97516 pod_ready.go:92] pod "kube-apiserver-multinode-200985" in "kube-system" namespace has status "Ready":"True"
	I0821 10:54:04.157840   97516 pod_ready.go:81] duration metric: took 4.465808ms waiting for pod "kube-apiserver-multinode-200985" in "kube-system" namespace to be "Ready" ...
	I0821 10:54:04.157847   97516 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-200985" in "kube-system" namespace to be "Ready" ...
	I0821 10:54:04.157895   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-200985
	I0821 10:54:04.157902   97516 round_trippers.go:469] Request Headers:
	I0821 10:54:04.157909   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:54:04.157917   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:54:04.159641   97516 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0821 10:54:04.159654   97516 round_trippers.go:577] Response Headers:
	I0821 10:54:04.159661   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:54:04.159667   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:54:04 GMT
	I0821 10:54:04.159673   97516 round_trippers.go:580]     Audit-Id: 493be3bc-9ed8-47de-b5ac-a22a0aeaf76e
	I0821 10:54:04.159681   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:54:04.159686   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:54:04.159694   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:54:04.159834   97516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-200985","namespace":"kube-system","uid":"9f23370f-54a6-415e-b146-ccd32e50df39","resourceVersion":"282","creationTimestamp":"2023-08-21T10:52:47Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"71757014aa83e6e2acd6644df67bac26","kubernetes.io/config.mirror":"71757014aa83e6e2acd6644df67bac26","kubernetes.io/config.seen":"2023-08-21T10:52:47.336799628Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0821 10:54:04.160171   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:54:04.160181   97516 round_trippers.go:469] Request Headers:
	I0821 10:54:04.160188   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:54:04.160194   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:54:04.161650   97516 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0821 10:54:04.161662   97516 round_trippers.go:577] Response Headers:
	I0821 10:54:04.161670   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:54:04 GMT
	I0821 10:54:04.161675   97516 round_trippers.go:580]     Audit-Id: f4f1e52f-5dc9-4574-bbbb-0a885e372320
	I0821 10:54:04.161682   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:54:04.161688   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:54:04.161693   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:54:04.161699   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:54:04.161795   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"388","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0821 10:54:04.162045   97516 pod_ready.go:92] pod "kube-controller-manager-multinode-200985" in "kube-system" namespace has status "Ready":"True"
	I0821 10:54:04.162057   97516 pod_ready.go:81] duration metric: took 4.203364ms waiting for pod "kube-controller-manager-multinode-200985" in "kube-system" namespace to be "Ready" ...
	I0821 10:54:04.162065   97516 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fc8dc" in "kube-system" namespace to be "Ready" ...
	I0821 10:54:04.335433   97516 request.go:629] Waited for 173.317838ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fc8dc
	I0821 10:54:04.335512   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fc8dc
	I0821 10:54:04.335524   97516 round_trippers.go:469] Request Headers:
	I0821 10:54:04.335535   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:54:04.335544   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:54:04.337979   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:54:04.337999   97516 round_trippers.go:577] Response Headers:
	I0821 10:54:04.338005   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:54:04.338011   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:54:04.338016   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:54:04 GMT
	I0821 10:54:04.338022   97516 round_trippers.go:580]     Audit-Id: 16dfa6fe-6939-4bd2-a75d-c13bb3f7b6df
	I0821 10:54:04.338027   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:54:04.338033   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:54:04.338122   97516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fc8dc","generateName":"kube-proxy-","namespace":"kube-system","uid":"ea89397d-6ff7-4659-9d2c-c311912368db","resourceVersion":"479","creationTimestamp":"2023-08-21T10:53:46Z","labels":{"controller-revision-hash":"86cc8bcbf7","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d0c1d33f-52b4-4e5d-a101-812adc397df3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d0c1d33f-52b4-4e5d-a101-812adc397df3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
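The "Waited for ... due to client-side throttling, not priority and fairness" messages around here are emitted by client-go's own token-bucket rate limiter, which delays requests before they leave the process; as the message says, the API server's priority-and-fairness machinery is not involved. Below is a sketch of where that limiter is configured, in the same package as the earlier sketches; the QPS=5/Burst=10 values are client-go's documented defaults and only an assumption about this client.

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newThrottledClient builds a clientset whose requests pass through the
// client-side rate limiter that produces the "Waited for ..." log lines.
func newThrottledClient(kubeconfig string) (kubernetes.Interface, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 5    // steady-state requests/sec (client-go default when unset)
	cfg.Burst = 10 // short bursts above QPS (client-go default when unset)
	return kubernetes.NewForConfig(cfg)
}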
	I0821 10:54:04.534911   97516 request.go:629] Waited for 196.361747ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-200985-m02
	I0821 10:54:04.534957   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985-m02
	I0821 10:54:04.534962   97516 round_trippers.go:469] Request Headers:
	I0821 10:54:04.534970   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:54:04.534976   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:54:04.537406   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:54:04.537741   97516 round_trippers.go:577] Response Headers:
	I0821 10:54:04.537764   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:54:04.537776   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:54:04.537786   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:54:04.537796   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:54:04.537806   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:54:04 GMT
	I0821 10:54:04.537816   97516 round_trippers.go:580]     Audit-Id: 27841d92-d801-4cab-924d-9f3657092c86
	I0821 10:54:04.537946   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985-m02","uid":"fc4ff00a-a1ee-4e2a-a3f5-a459ad19cc6f","resourceVersion":"488","creationTimestamp":"2023-08-21T10:53:46Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5296 chars]
	I0821 10:54:04.538416   97516 pod_ready.go:92] pod "kube-proxy-fc8dc" in "kube-system" namespace has status "Ready":"True"
	I0821 10:54:04.538428   97516 pod_ready.go:81] duration metric: took 376.35701ms waiting for pod "kube-proxy-fc8dc" in "kube-system" namespace to be "Ready" ...
	I0821 10:54:04.538440   97516 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hr82h" in "kube-system" namespace to be "Ready" ...
	I0821 10:54:04.735076   97516 request.go:629] Waited for 196.562397ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hr82h
	I0821 10:54:04.735147   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hr82h
	I0821 10:54:04.735155   97516 round_trippers.go:469] Request Headers:
	I0821 10:54:04.735163   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:54:04.735169   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:54:04.737428   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:54:04.737457   97516 round_trippers.go:577] Response Headers:
	I0821 10:54:04.737464   97516 round_trippers.go:580]     Audit-Id: 7054e2fe-9aa0-4a92-890d-7a41a7cc7eb6
	I0821 10:54:04.737472   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:54:04.737481   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:54:04.737489   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:54:04.737501   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:54:04.737509   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:54:04 GMT
	I0821 10:54:04.737627   97516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hr82h","generateName":"kube-proxy-","namespace":"kube-system","uid":"3c4817fa-d083-4bc4-9e1b-a98f77433293","resourceVersion":"363","creationTimestamp":"2023-08-21T10:53:01Z","labels":{"controller-revision-hash":"86cc8bcbf7","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d0c1d33f-52b4-4e5d-a101-812adc397df3","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:53:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d0c1d33f-52b4-4e5d-a101-812adc397df3\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0821 10:54:04.935434   97516 request.go:629] Waited for 197.368856ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:54:04.935486   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:54:04.935490   97516 round_trippers.go:469] Request Headers:
	I0821 10:54:04.935498   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:54:04.935505   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:54:04.937701   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:54:04.937725   97516 round_trippers.go:577] Response Headers:
	I0821 10:54:04.937732   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:54:04.937738   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:54:04.937744   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:54:04.937749   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:54:04.937755   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:54:04 GMT
	I0821 10:54:04.937760   97516 round_trippers.go:580]     Audit-Id: 68830fe3-8667-4377-8279-11cc22f1ccfa
	I0821 10:54:04.937856   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"388","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0821 10:54:04.938164   97516 pod_ready.go:92] pod "kube-proxy-hr82h" in "kube-system" namespace has status "Ready":"True"
	I0821 10:54:04.938179   97516 pod_ready.go:81] duration metric: took 399.732701ms waiting for pod "kube-proxy-hr82h" in "kube-system" namespace to be "Ready" ...
	I0821 10:54:04.938189   97516 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-200985" in "kube-system" namespace to be "Ready" ...
	I0821 10:54:05.134539   97516 request.go:629] Waited for 196.290037ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-200985
	I0821 10:54:05.134606   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-200985
	I0821 10:54:05.134612   97516 round_trippers.go:469] Request Headers:
	I0821 10:54:05.134622   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:54:05.134634   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:54:05.136837   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:54:05.136855   97516 round_trippers.go:577] Response Headers:
	I0821 10:54:05.136864   97516 round_trippers.go:580]     Audit-Id: 19b4672f-cf4a-48d1-9823-738e9ee5c6fa
	I0821 10:54:05.136870   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:54:05.136876   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:54:05.136881   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:54:05.136887   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:54:05.136893   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:54:05 GMT
	I0821 10:54:05.136979   97516 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-200985","namespace":"kube-system","uid":"1ac1d965-22f6-4c06-b04f-7cbfab581bbd","resourceVersion":"257","creationTimestamp":"2023-08-21T10:52:47Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"775aa5ce7b376581a0fd6b5e7ef37b50","kubernetes.io/config.mirror":"775aa5ce7b376581a0fd6b5e7ef37b50","kubernetes.io/config.seen":"2023-08-21T10:52:47.336801061Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T10:52:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0821 10:54:05.334616   97516 request.go:629] Waited for 197.284566ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:54:05.334680   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-200985
	I0821 10:54:05.334687   97516 round_trippers.go:469] Request Headers:
	I0821 10:54:05.334695   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:54:05.334705   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:54:05.337060   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:54:05.337086   97516 round_trippers.go:577] Response Headers:
	I0821 10:54:05.337097   97516 round_trippers.go:580]     Audit-Id: dbef59db-4e35-46b6-b0d5-6149c9fbff78
	I0821 10:54:05.337107   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:54:05.337116   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:54:05.337125   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:54:05.337139   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:54:05.337152   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:54:05 GMT
	I0821 10:54:05.337266   97516 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"388","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T10:52:44Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0821 10:54:05.337598   97516 pod_ready.go:92] pod "kube-scheduler-multinode-200985" in "kube-system" namespace has status "Ready":"True"
	I0821 10:54:05.337611   97516 pod_ready.go:81] duration metric: took 399.416281ms waiting for pod "kube-scheduler-multinode-200985" in "kube-system" namespace to be "Ready" ...
	I0821 10:54:05.337620   97516 pod_ready.go:38] duration metric: took 1.200309108s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0821 10:54:05.337633   97516 system_svc.go:44] waiting for kubelet service to be running ....
	I0821 10:54:05.337679   97516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0821 10:54:05.347945   97516 system_svc.go:56] duration metric: took 10.304681ms WaitForService to wait for kubelet.
	I0821 10:54:05.347971   97516 kubeadm.go:581] duration metric: took 18.234916871s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0821 10:54:05.347998   97516 node_conditions.go:102] verifying NodePressure condition ...
	I0821 10:54:05.535433   97516 request.go:629] Waited for 187.34765ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0821 10:54:05.535492   97516 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0821 10:54:05.535499   97516 round_trippers.go:469] Request Headers:
	I0821 10:54:05.535510   97516 round_trippers.go:473]     Accept: application/json, */*
	I0821 10:54:05.535520   97516 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0821 10:54:05.537804   97516 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 10:54:05.537832   97516 round_trippers.go:577] Response Headers:
	I0821 10:54:05.537841   97516 round_trippers.go:580]     Date: Mon, 21 Aug 2023 10:54:05 GMT
	I0821 10:54:05.537851   97516 round_trippers.go:580]     Audit-Id: a6571531-42dd-4ebc-8a51-102a4e45f90a
	I0821 10:54:05.537860   97516 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 10:54:05.537870   97516 round_trippers.go:580]     Content-Type: application/json
	I0821 10:54:05.537883   97516 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 6d267e50-2a46-416e-9f98-5cbbb95ca0e9
	I0821 10:54:05.537894   97516 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: a3fb5809-e221-45b7-9fc2-4142f504bbea
	I0821 10:54:05.538057   97516 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"492"},"items":[{"metadata":{"name":"multinode-200985","uid":"6cdd4b3b-67f3-40a2-bdc4-978a7949ab1c","resourceVersion":"388","creationTimestamp":"2023-08-21T10:52:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-200985","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-200985","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T10_52_48_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12168 chars]
	I0821 10:54:05.538516   97516 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0821 10:54:05.538532   97516 node_conditions.go:123] node cpu capacity is 8
	I0821 10:54:05.538543   97516 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0821 10:54:05.538549   97516 node_conditions.go:123] node cpu capacity is 8
	I0821 10:54:05.538555   97516 node_conditions.go:105] duration metric: took 190.552073ms to run NodePressure ...
	I0821 10:54:05.538572   97516 start.go:228] waiting for startup goroutines ...
	I0821 10:54:05.538607   97516 start.go:242] writing updated cluster config ...
	I0821 10:54:05.538895   97516 ssh_runner.go:195] Run: rm -f paused
	I0821 10:54:05.584171   97516 start.go:600] kubectl: 1.28.0, cluster: 1.27.4 (minor skew: 1)
	I0821 10:54:05.587672   97516 out.go:177] * Done! kubectl is now configured to use "multinode-200985" cluster and "default" namespace by default
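
The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's default request rate limiter (5 QPS, burst 10), not from API Priority and Fairness on the server. A minimal Go sketch of raising those limits on a caller's own rest.Config; this is illustrative only, not minikube's code, and the kubeconfig path is a placeholder:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder kubeconfig path; adjust for your environment.
		config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		// Defaults are QPS=5, Burst=10; bursts of polling GETs like the
		// pod_ready checks above get queued client-side at that rate.
		config.QPS = 50
		config.Burst = 100

		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("pods:", len(pods.Items))
	}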
	
	* 
	* ==> CRI-O <==
	* Aug 21 10:53:32 multinode-200985 crio[956]: time="2023-08-21 10:53:32.516615682Z" level=info msg="Starting container: a88557ce9e3b696321b0b6b71b26f6bdcffa140e25f43ee3ba08ed5506ae7553" id=967953fd-6f8b-4882-b2a5-874700fe78fd name=/runtime.v1.RuntimeService/StartContainer
	Aug 21 10:53:32 multinode-200985 crio[956]: time="2023-08-21 10:53:32.517071875Z" level=info msg="Created container 21e5e477e5c70b3a20586117ea2ebf8ee877d6c30e35132c249df9c8a4c0bd14: kube-system/coredns-5d78c9869d-p7wfm/coredns" id=d211bdd0-5383-47ec-9fd2-e9172ba8b4c3 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 21 10:53:32 multinode-200985 crio[956]: time="2023-08-21 10:53:32.517476599Z" level=info msg="Starting container: 21e5e477e5c70b3a20586117ea2ebf8ee877d6c30e35132c249df9c8a4c0bd14" id=a961ca66-11f4-46bf-8612-d5106617a931 name=/runtime.v1.RuntimeService/StartContainer
	Aug 21 10:53:32 multinode-200985 crio[956]: time="2023-08-21 10:53:32.538623203Z" level=info msg="Started container" PID=2361 containerID=a88557ce9e3b696321b0b6b71b26f6bdcffa140e25f43ee3ba08ed5506ae7553 description=kube-system/storage-provisioner/storage-provisioner id=967953fd-6f8b-4882-b2a5-874700fe78fd name=/runtime.v1.RuntimeService/StartContainer sandboxID=db72f93bb3c51bf65115c477870658693e2ecb1c5f0ff7293cc122039833d0f6
	Aug 21 10:53:32 multinode-200985 crio[956]: time="2023-08-21 10:53:32.540533879Z" level=info msg="Started container" PID=2368 containerID=21e5e477e5c70b3a20586117ea2ebf8ee877d6c30e35132c249df9c8a4c0bd14 description=kube-system/coredns-5d78c9869d-p7wfm/coredns id=a961ca66-11f4-46bf-8612-d5106617a931 name=/runtime.v1.RuntimeService/StartContainer sandboxID=94f3a91b88fe9260fd6dccc650265650ae0d33c286879b41f7ceddaa9f2f9431
	Aug 21 10:54:06 multinode-200985 crio[956]: time="2023-08-21 10:54:06.615206776Z" level=info msg="Running pod sandbox: default/busybox-67b7f59bb-4kkp2/POD" id=b5ddaa89-83ab-4dee-a37e-1145b2e24776 name=/runtime.v1.RuntimeService/RunPodSandbox
	Aug 21 10:54:06 multinode-200985 crio[956]: time="2023-08-21 10:54:06.615281227Z" level=warning msg="Allowed annotations are specified for workload []"
	Aug 21 10:54:06 multinode-200985 crio[956]: time="2023-08-21 10:54:06.627691439Z" level=info msg="Got pod network &{Name:busybox-67b7f59bb-4kkp2 Namespace:default ID:adafce3896b992e6ed9c7591f548e1ba464dee077f0306da132800b3afc45dbe UID:35f55c4e-c20d-426e-b324-aa12b9425519 NetNS:/var/run/netns/4e12aa8e-386c-4408-b44a-10a7563d9512 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 21 10:54:06 multinode-200985 crio[956]: time="2023-08-21 10:54:06.627720789Z" level=info msg="Adding pod default_busybox-67b7f59bb-4kkp2 to CNI network \"kindnet\" (type=ptp)"
	Aug 21 10:54:06 multinode-200985 crio[956]: time="2023-08-21 10:54:06.637834614Z" level=info msg="Got pod network &{Name:busybox-67b7f59bb-4kkp2 Namespace:default ID:adafce3896b992e6ed9c7591f548e1ba464dee077f0306da132800b3afc45dbe UID:35f55c4e-c20d-426e-b324-aa12b9425519 NetNS:/var/run/netns/4e12aa8e-386c-4408-b44a-10a7563d9512 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 21 10:54:06 multinode-200985 crio[956]: time="2023-08-21 10:54:06.637941552Z" level=info msg="Checking pod default_busybox-67b7f59bb-4kkp2 for CNI network kindnet (type=ptp)"
	Aug 21 10:54:06 multinode-200985 crio[956]: time="2023-08-21 10:54:06.662247182Z" level=info msg="Ran pod sandbox adafce3896b992e6ed9c7591f548e1ba464dee077f0306da132800b3afc45dbe with infra container: default/busybox-67b7f59bb-4kkp2/POD" id=b5ddaa89-83ab-4dee-a37e-1145b2e24776 name=/runtime.v1.RuntimeService/RunPodSandbox
	Aug 21 10:54:06 multinode-200985 crio[956]: time="2023-08-21 10:54:06.663268955Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=55a08167-6aa2-4c1a-b0c8-30a785436809 name=/runtime.v1.ImageService/ImageStatus
	Aug 21 10:54:06 multinode-200985 crio[956]: time="2023-08-21 10:54:06.663534132Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=55a08167-6aa2-4c1a-b0c8-30a785436809 name=/runtime.v1.ImageService/ImageStatus
	Aug 21 10:54:06 multinode-200985 crio[956]: time="2023-08-21 10:54:06.664252611Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=c48606b4-cc5d-49cd-9955-8932be63c3de name=/runtime.v1.ImageService/PullImage
	Aug 21 10:54:06 multinode-200985 crio[956]: time="2023-08-21 10:54:06.667332739Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Aug 21 10:54:06 multinode-200985 crio[956]: time="2023-08-21 10:54:06.816568081Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Aug 21 10:54:07 multinode-200985 crio[956]: time="2023-08-21 10:54:07.186494602Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335" id=c48606b4-cc5d-49cd-9955-8932be63c3de name=/runtime.v1.ImageService/PullImage
	Aug 21 10:54:07 multinode-200985 crio[956]: time="2023-08-21 10:54:07.187452086Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=abd54ff5-c1d7-4ea9-a5e3-8bcdb1ce29cc name=/runtime.v1.ImageService/ImageStatus
	Aug 21 10:54:07 multinode-200985 crio[956]: time="2023-08-21 10:54:07.188041176Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=abd54ff5-c1d7-4ea9-a5e3-8bcdb1ce29cc name=/runtime.v1.ImageService/ImageStatus
	Aug 21 10:54:07 multinode-200985 crio[956]: time="2023-08-21 10:54:07.188738384Z" level=info msg="Creating container: default/busybox-67b7f59bb-4kkp2/busybox" id=cab4bea2-32cb-4316-966a-2039fcee50fb name=/runtime.v1.RuntimeService/CreateContainer
	Aug 21 10:54:07 multinode-200985 crio[956]: time="2023-08-21 10:54:07.188834287Z" level=warning msg="Allowed annotations are specified for workload []"
	Aug 21 10:54:07 multinode-200985 crio[956]: time="2023-08-21 10:54:07.263633752Z" level=info msg="Created container 4ab012918cc1ba49151b200284e5bda8f92393fa6427ed0d362acbfec1a5fa8a: default/busybox-67b7f59bb-4kkp2/busybox" id=cab4bea2-32cb-4316-966a-2039fcee50fb name=/runtime.v1.RuntimeService/CreateContainer
	Aug 21 10:54:07 multinode-200985 crio[956]: time="2023-08-21 10:54:07.264237198Z" level=info msg="Starting container: 4ab012918cc1ba49151b200284e5bda8f92393fa6427ed0d362acbfec1a5fa8a" id=45c14b4a-25bc-4573-8810-6fe3008fb0fe name=/runtime.v1.RuntimeService/StartContainer
	Aug 21 10:54:07 multinode-200985 crio[956]: time="2023-08-21 10:54:07.272075985Z" level=info msg="Started container" PID=2542 containerID=4ab012918cc1ba49151b200284e5bda8f92393fa6427ed0d362acbfec1a5fa8a description=default/busybox-67b7f59bb-4kkp2/busybox id=45c14b4a-25bc-4573-8810-6fe3008fb0fe name=/runtime.v1.RuntimeService/StartContainer sandboxID=adafce3896b992e6ed9c7591f548e1ba464dee077f0306da132800b3afc45dbe
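
The ImageStatus -> PullImage -> ImageStatus sequence in the CRI-O log above is the standard CRI image flow the kubelet drives over the crio.sock gRPC endpoint. A sketch of issuing the same calls directly with the cri-api v1 client, assuming the socket path from the node annotation in this report; this is an illustration, not anything the test itself runs:

	package main

	import (
		"context"
		"fmt"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Socket path taken from the kubeadm cri-socket annotation above.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		img := runtimeapi.NewImageServiceClient(conn)
		spec := &runtimeapi.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28"}

		// Pull, then re-check status, mirroring the log's "not found" -> pull -> status.
		if _, err := img.PullImage(context.TODO(), &runtimeapi.PullImageRequest{Image: spec}); err != nil {
			panic(err)
		}
		st, err := img.ImageStatus(context.TODO(), &runtimeapi.ImageStatusRequest{Image: spec})
		if err != nil {
			panic(err)
		}
		fmt.Println("repo digests:", st.Image.RepoDigests)
	}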
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	4ab012918cc1b       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 seconds ago        Running             busybox                   0                   adafce3896b99       busybox-67b7f59bb-4kkp2
	21e5e477e5c70       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      38 seconds ago       Running             coredns                   0                   94f3a91b88fe9       coredns-5d78c9869d-p7wfm
	a88557ce9e3b6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      38 seconds ago       Running             storage-provisioner       0                   db72f93bb3c51       storage-provisioner
	396104f317dc0       6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4                                      About a minute ago   Running             kube-proxy                0                   4f5a18f920c2d       kube-proxy-hr82h
	f1c6d7963dc69       b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da                                      About a minute ago   Running             kindnet-cni               0                   d27dc6b894e59       kindnet-l9qdc
	f27c8ba185223       e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c                                      About a minute ago   Running             kube-apiserver            0                   561d7cf73cac7       kube-apiserver-multinode-200985
	f7eeff82341e6       f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5                                      About a minute ago   Running             kube-controller-manager   0                   f19686e057836       kube-controller-manager-multinode-200985
	03cd66117d924       98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16                                      About a minute ago   Running             kube-scheduler            0                   e019132649793       kube-scheduler-multinode-200985
	1346c4f1a47df       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681                                      About a minute ago   Running             etcd                      0                   65f7038235de3       etcd-multinode-200985
	
	* 
	* ==> coredns [21e5e477e5c70b3a20586117ea2ebf8ee877d6c30e35132c249df9c8a4c0bd14] <==
	* [INFO] 10.244.0.3:40609 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000078012s
	[INFO] 10.244.1.2:57254 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107396s
	[INFO] 10.244.1.2:53836 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001515704s
	[INFO] 10.244.1.2:49780 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000080876s
	[INFO] 10.244.1.2:35048 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000065093s
	[INFO] 10.244.1.2:41449 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.000931406s
	[INFO] 10.244.1.2:49507 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000079255s
	[INFO] 10.244.1.2:59793 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000056399s
	[INFO] 10.244.1.2:53506 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000072654s
	[INFO] 10.244.0.3:44273 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109535s
	[INFO] 10.244.0.3:35212 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000077026s
	[INFO] 10.244.0.3:43364 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000063607s
	[INFO] 10.244.0.3:39038 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000038138s
	[INFO] 10.244.1.2:60979 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116036s
	[INFO] 10.244.1.2:51792 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000114764s
	[INFO] 10.244.1.2:48115 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000085177s
	[INFO] 10.244.1.2:34210 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000050272s
	[INFO] 10.244.0.3:39673 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000099476s
	[INFO] 10.244.0.3:58773 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000115331s
	[INFO] 10.244.0.3:53135 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000088965s
	[INFO] 10.244.0.3:56545 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00007857s
	[INFO] 10.244.1.2:41861 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130127s
	[INFO] 10.244.1.2:42985 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000090677s
	[INFO] 10.244.1.2:52189 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000064391s
	[INFO] 10.244.1.2:46802 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00005327s
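
The query pattern above — "kubernetes.default" answered NXDOMAIN, "kubernetes.default.default.svc.cluster.local" NXDOMAIN, then "kubernetes.default.svc.cluster.local" NOERROR — is the pod resolver walking its resolv.conf search path (ndots:5 with the default.svc.cluster.local / svc.cluster.local / cluster.local suffixes the kubelet writes). A one-call Go sketch that, run inside a pod, would generate that family of CoreDNS log lines:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// "kubernetes.default" has fewer dots than ndots, so the resolver
		// tries the search suffixes; only the svc.cluster.local form resolves.
		addrs, err := net.LookupHost("kubernetes.default")
		if err != nil {
			panic(err)
		}
		fmt.Println(addrs) // e.g. [10.96.0.1], the kubernetes clusterIP allocated above
	}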
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-200985
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-200985
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43
	                    minikube.k8s.io/name=multinode-200985
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_21T10_52_48_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 21 Aug 2023 10:52:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-200985
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 21 Aug 2023 10:54:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 21 Aug 2023 10:53:32 +0000   Mon, 21 Aug 2023 10:52:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 21 Aug 2023 10:53:32 +0000   Mon, 21 Aug 2023 10:52:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 21 Aug 2023 10:53:32 +0000   Mon, 21 Aug 2023 10:52:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 21 Aug 2023 10:53:32 +0000   Mon, 21 Aug 2023 10:53:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-200985
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	System Info:
	  Machine ID:                 d0f31c9dc0da4197a319e15b4b97dfdb
	  System UUID:                c8979f18-7023-4180-b7a5-09a6dd60e9c9
	  Boot ID:                    19bba9d5-fb53-4c36-8f17-b39d772f0931
	  Kernel Version:             5.15.0-1039-gcp
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.27.4
	  Kube-Proxy Version:         v1.27.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-4kkp2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 coredns-5d78c9869d-p7wfm                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     70s
	  kube-system                 etcd-multinode-200985                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         84s
	  kube-system                 kindnet-l9qdc                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      70s
	  kube-system                 kube-apiserver-multinode-200985             250m (3%)     0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-controller-manager-multinode-200985    200m (2%)     0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-proxy-hr82h                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-scheduler-multinode-200985             100m (1%)     0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 69s   kube-proxy       
	  Normal  Starting                 84s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  84s   kubelet          Node multinode-200985 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    84s   kubelet          Node multinode-200985 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     84s   kubelet          Node multinode-200985 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           71s   node-controller  Node multinode-200985 event: Registered Node multinode-200985 in Controller
	  Normal  NodeReady                39s   kubelet          Node multinode-200985 status is now: NodeReady
	
	
	Name:               multinode-200985-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-200985-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 21 Aug 2023 10:53:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-200985-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 21 Aug 2023 10:54:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 21 Aug 2023 10:54:03 +0000   Mon, 21 Aug 2023 10:53:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 21 Aug 2023 10:54:03 +0000   Mon, 21 Aug 2023 10:53:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 21 Aug 2023 10:54:03 +0000   Mon, 21 Aug 2023 10:53:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 21 Aug 2023 10:54:03 +0000   Mon, 21 Aug 2023 10:54:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-200985-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	System Info:
	  Machine ID:                 fda87d2fe85a4fa58f9cc44f4992fecc
	  System UUID:                b72f5446-fdb5-479c-965a-13d8e7f9984b
	  Boot ID:                    19bba9d5-fb53-4c36-8f17-b39d772f0931
	  Kernel Version:             5.15.0-1039-gcp
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.27.4
	  Kube-Proxy Version:         v1.27.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-vtjvj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 kindnet-t6nk2              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-proxy-fc8dc           0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12s                kube-proxy       
	  Normal  NodeHasSufficientMemory  25s (x5 over 26s)  kubelet          Node multinode-200985-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s (x5 over 26s)  kubelet          Node multinode-200985-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s (x5 over 26s)  kubelet          Node multinode-200985-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           21s                node-controller  Node multinode-200985-m02 event: Registered Node multinode-200985-m02 in Controller
	  Normal  NodeReady                8s                 kubelet          Node multinode-200985-m02 status is now: NodeReady
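
The "verifying NodePressure condition" step in the start log reads exactly the fields shown in the two node descriptions above (conditions plus ephemeral-storage and cpu capacity). A minimal client-go sketch of fetching the same data; illustrative only, with a placeholder kubeconfig path:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			// Same numbers the log prints: 304681132Ki ephemeral storage, 8 CPUs.
			fmt.Println(n.Name, "ephemeral:", eph.String(), "cpu:", cpu.String())
			for _, c := range n.Status.Conditions {
				if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure {
					fmt.Println("  ", c.Type, "=", c.Status)
				}
			}
		}
	}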
	
	* 
	* ==> dmesg <==
	* [  +0.004916] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006573] FS-Cache: N-cookie d=0000000057af7611{9p.inode} n=0000000097034eab
	[  +0.007347] FS-Cache: N-key=[8] '0690130200000000'
	[  +2.906182] FS-Cache: Duplicate cookie detected
	[  +0.004707] FS-Cache: O-cookie c=00000012 [p=00000002 fl=222 nc=0 na=1]
	[  +0.006744] FS-Cache: O-cookie d=00000000f9f2d848{9P.session} n=000000004e5885ae
	[  +0.007517] FS-Cache: O-key=[10] '34323935323639393534'
	[  +0.005373] FS-Cache: N-cookie c=00000013 [p=00000002 fl=2 nc=0 na=1]
	[  +0.006560] FS-Cache: N-cookie d=00000000f9f2d848{9P.session} n=000000005c3d05d3
	[  +0.007520] FS-Cache: N-key=[10] '34323935323639393534'
	[ +16.357657] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Aug21 10:45] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ca c1 a8 91 e2 bd c2 67 4a c6 ee 9c 08 00
	[  +1.028097] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: ca c1 a8 91 e2 bd c2 67 4a c6 ee 9c 08 00
	[  +2.015757] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: ca c1 a8 91 e2 bd c2 67 4a c6 ee 9c 08 00
	[  +4.063569] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ca c1 a8 91 e2 bd c2 67 4a c6 ee 9c 08 00
	[  +8.191209] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ca c1 a8 91 e2 bd c2 67 4a c6 ee 9c 08 00
	[ +16.126462] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: ca c1 a8 91 e2 bd c2 67 4a c6 ee 9c 08 00
	[Aug21 10:46] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ca c1 a8 91 e2 bd c2 67 4a c6 ee 9c 08 00
	
	* 
	* ==> etcd [1346c4f1a47df0d9926f88b15af04d89e12c7e1f1f76437c9b5f0e6441a31e1e] <==
	* {"level":"info","ts":"2023-08-21T10:52:42.054Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-08-21T10:52:42.056Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-08-21T10:52:42.056Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-08-21T10:52:42.056Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-08-21T10:52:42.056Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-08-21T10:52:42.056Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-08-21T10:52:42.246Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-08-21T10:52:42.246Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-08-21T10:52:42.246Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-08-21T10:52:42.246Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-08-21T10:52:42.246Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-08-21T10:52:42.246Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-08-21T10:52:42.246Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-08-21T10:52:42.247Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-21T10:52:42.248Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-200985 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-08-21T10:52:42.248Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-21T10:52:42.248Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-21T10:52:42.248Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-21T10:52:42.248Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-08-21T10:52:42.248Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-21T10:52:42.248Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-21T10:52:42.249Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-21T10:52:42.249Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-08-21T10:52:42.249Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-08-21T10:53:38.739Z","caller":"traceutil/trace.go:171","msg":"trace[586419751] transaction","detail":"{read_only:false; response_revision:417; number_of_response:1; }","duration":"145.927475ms","start":"2023-08-21T10:53:38.593Z","end":"2023-08-21T10:53:38.739Z","steps":["trace[586419751] 'process raft request'  (duration: 145.823627ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  10:54:11 up 36 min,  0 users,  load average: 0.94, 1.07, 0.70
	Linux multinode-200985 5.15.0-1039-gcp #47~20.04.1-Ubuntu SMP Thu Jul 27 22:40:03 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [f1c6d7963dc69802ff3b3468ba21f5bd4b1771747673f338513a9ebfa45677b8] <==
	* I0821 10:53:01.636920       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0821 10:53:01.637005       1 main.go:107] hostIP = 192.168.58.2
	podIP = 192.168.58.2
	I0821 10:53:01.637155       1 main.go:116] setting mtu 1500 for CNI 
	I0821 10:53:01.637167       1 main.go:146] kindnetd IP family: "ipv4"
	I0821 10:53:01.637188       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0821 10:53:31.871550       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0821 10:53:31.878706       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0821 10:53:31.878729       1 main.go:227] handling current node
	I0821 10:53:41.892292       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0821 10:53:41.892316       1 main.go:227] handling current node
	I0821 10:53:51.905013       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0821 10:53:51.905039       1 main.go:227] handling current node
	I0821 10:53:51.905048       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0821 10:53:51.905052       1 main.go:250] Node multinode-200985-m02 has CIDR [10.244.1.0/24] 
	I0821 10:53:51.905205       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I0821 10:54:01.909734       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0821 10:54:01.909756       1 main.go:227] handling current node
	I0821 10:54:01.909765       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0821 10:54:01.909770       1 main.go:250] Node multinode-200985-m02 has CIDR [10.244.1.0/24] 
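
The "Adding route {Ifindex: 0 Dst: 10.244.1.0/24 ... Gw: 192.168.58.3 ...}" line above is kindnet installing a host route so pods on this node can reach the m02 pod CIDR via the peer's node IP. A rough equivalent using the vishvananda/netlink package — a sketch under that assumption, not necessarily how kindnetd itself is written, and it needs root to actually apply:

	package main

	import (
		"net"

		"github.com/vishvananda/netlink"
	)

	func main() {
		// Destination CIDR and gateway copied from the kindnet log line above.
		_, dst, err := net.ParseCIDR("10.244.1.0/24")
		if err != nil {
			panic(err)
		}
		route := &netlink.Route{
			Dst: dst,
			Gw:  net.ParseIP("192.168.58.3"),
		}
		// RouteReplace adds the route, or updates it in place if it already exists.
		if err := netlink.RouteReplace(route); err != nil {
			panic(err)
		}
	}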
	
	* 
	* ==> kube-apiserver [f27c8ba185223b21c117e1052ef91419930702ebac1c82eaa88b5af83db72c75] <==
	* I0821 10:52:44.435559       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0821 10:52:44.435649       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0821 10:52:44.436047       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0821 10:52:44.437441       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0821 10:52:44.437511       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0821 10:52:44.437598       1 shared_informer.go:318] Caches are synced for configmaps
	I0821 10:52:44.438452       1 controller.go:624] quota admission added evaluator for: namespaces
	E0821 10:52:44.440548       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0821 10:52:44.644044       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0821 10:52:45.052156       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0821 10:52:45.277875       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0821 10:52:45.281394       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0821 10:52:45.281412       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0821 10:52:45.640596       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0821 10:52:45.673738       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0821 10:52:45.751194       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0821 10:52:45.757391       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0821 10:52:45.758334       1 controller.go:624] quota admission added evaluator for: endpoints
	I0821 10:52:45.762160       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0821 10:52:46.347112       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0821 10:52:47.263519       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0821 10:52:47.273827       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0821 10:52:47.281771       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0821 10:53:01.070426       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0821 10:53:01.121789       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [f7eeff82341e62b739dc9f033582ef9c8981d77513d2f82ab6130435916d8185] <==
	* I0821 10:53:00.300930       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0821 10:53:00.321992       1 shared_informer.go:318] Caches are synced for resource quota
	I0821 10:53:00.347131       1 shared_informer.go:318] Caches are synced for resource quota
	I0821 10:53:00.419124       1 shared_informer.go:318] Caches are synced for attach detach
	I0821 10:53:00.739670       1 shared_informer.go:318] Caches are synced for garbage collector
	I0821 10:53:00.767964       1 shared_informer.go:318] Caches are synced for garbage collector
	I0821 10:53:00.767998       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0821 10:53:01.078573       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-l9qdc"
	I0821 10:53:01.079910       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-hr82h"
	I0821 10:53:01.125252       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5d78c9869d to 2"
	I0821 10:53:01.225124       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-ld2zf"
	I0821 10:53:01.230666       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-p7wfm"
	I0821 10:53:01.656209       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5d78c9869d to 1 from 2"
	I0821 10:53:01.672849       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5d78c9869d-ld2zf"
	I0821 10:53:35.173266       1 node_lifecycle_controller.go:1046] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0821 10:53:46.536104       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-200985-m02\" does not exist"
	I0821 10:53:46.541079       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-200985-m02" podCIDRs=[10.244.1.0/24]
	I0821 10:53:46.545923       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-t6nk2"
	I0821 10:53:46.545952       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-fc8dc"
	I0821 10:53:50.176142       1 event.go:307] "Event occurred" object="multinode-200985-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-200985-m02 event: Registered Node multinode-200985-m02 in Controller"
	I0821 10:53:50.176159       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-200985-m02"
	W0821 10:54:03.857424       1 topologycache.go:232] Can't get CPU or zone information for multinode-200985-m02 node
	I0821 10:54:06.294477       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-67b7f59bb to 2"
	I0821 10:54:06.301370       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-vtjvj"
	I0821 10:54:06.305824       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-4kkp2"
	
	* 
	* ==> kube-proxy [396104f317dc0f03135d076432b15bc59f6e750cc071ee1fe0995d9b623c3f13] <==
	* I0821 10:53:01.672337       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I0821 10:53:01.672429       1 server_others.go:110] "Detected node IP" address="192.168.58.2"
	I0821 10:53:01.672452       1 server_others.go:554] "Using iptables proxy"
	I0821 10:53:01.759241       1 server_others.go:192] "Using iptables Proxier"
	I0821 10:53:01.759268       1 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0821 10:53:01.759278       1 server_others.go:200] "Creating dualStackProxier for iptables"
	I0821 10:53:01.759290       1 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0821 10:53:01.759318       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0821 10:53:01.759935       1 server.go:658] "Version info" version="v1.27.4"
	I0821 10:53:01.759997       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0821 10:53:01.760629       1 config.go:97] "Starting endpoint slice config controller"
	I0821 10:53:01.760691       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0821 10:53:01.763676       1 config.go:188] "Starting service config controller"
	I0821 10:53:01.763696       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0821 10:53:01.760662       1 config.go:315] "Starting node config controller"
	I0821 10:53:01.763717       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0821 10:53:01.861496       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0821 10:53:01.864741       1 shared_informer.go:318] Caches are synced for service config
	I0821 10:53:01.864742       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [03cd66117d9240f59c1ab51e75d31b12fcc1344ae4f69c5ee532cab1846ac902] <==
	* W0821 10:52:44.536403       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0821 10:52:44.536517       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0821 10:52:44.536076       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0821 10:52:44.536660       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0821 10:52:44.536765       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0821 10:52:44.536846       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0821 10:52:44.536314       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0821 10:52:44.536329       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0821 10:52:44.536457       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0821 10:52:44.536939       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0821 10:52:44.536473       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0821 10:52:44.536975       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0821 10:52:44.536767       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0821 10:52:44.536996       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0821 10:52:45.350479       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0821 10:52:45.350518       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0821 10:52:45.411096       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0821 10:52:45.411133       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0821 10:52:45.464021       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0821 10:52:45.464056       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0821 10:52:45.487256       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0821 10:52:45.487292       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0821 10:52:45.510490       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0821 10:52:45.510523       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0821 10:52:48.058907       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Aug 21 10:53:01 multinode-200985 kubelet[1591]: I0821 10:53:01.139497    1591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c4817fa-d083-4bc4-9e1b-a98f77433293-lib-modules\") pod \"kube-proxy-hr82h\" (UID: \"3c4817fa-d083-4bc4-9e1b-a98f77433293\") " pod="kube-system/kube-proxy-hr82h"
	Aug 21 10:53:01 multinode-200985 kubelet[1591]: I0821 10:53:01.139642    1591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md9wb\" (UniqueName: \"kubernetes.io/projected/a4612ce4-c44d-48c7-88d3-a03b659ddef3-kube-api-access-md9wb\") pod \"kindnet-l9qdc\" (UID: \"a4612ce4-c44d-48c7-88d3-a03b659ddef3\") " pod="kube-system/kindnet-l9qdc"
	Aug 21 10:53:01 multinode-200985 kubelet[1591]: I0821 10:53:01.139704    1591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3c4817fa-d083-4bc4-9e1b-a98f77433293-kube-proxy\") pod \"kube-proxy-hr82h\" (UID: \"3c4817fa-d083-4bc4-9e1b-a98f77433293\") " pod="kube-system/kube-proxy-hr82h"
	Aug 21 10:53:01 multinode-200985 kubelet[1591]: I0821 10:53:01.139738    1591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c4817fa-d083-4bc4-9e1b-a98f77433293-xtables-lock\") pod \"kube-proxy-hr82h\" (UID: \"3c4817fa-d083-4bc4-9e1b-a98f77433293\") " pod="kube-system/kube-proxy-hr82h"
	Aug 21 10:53:01 multinode-200985 kubelet[1591]: I0821 10:53:01.139768    1591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a4612ce4-c44d-48c7-88d3-a03b659ddef3-lib-modules\") pod \"kindnet-l9qdc\" (UID: \"a4612ce4-c44d-48c7-88d3-a03b659ddef3\") " pod="kube-system/kindnet-l9qdc"
	Aug 21 10:53:01 multinode-200985 kubelet[1591]: I0821 10:53:01.139797    1591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znbbj\" (UniqueName: \"kubernetes.io/projected/3c4817fa-d083-4bc4-9e1b-a98f77433293-kube-api-access-znbbj\") pod \"kube-proxy-hr82h\" (UID: \"3c4817fa-d083-4bc4-9e1b-a98f77433293\") " pod="kube-system/kube-proxy-hr82h"
	Aug 21 10:53:01 multinode-200985 kubelet[1591]: W0821 10:53:01.432157    1591 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/30a11af662edf967ffb99de2ef034ce516ea0aacab8a798c9436236a541bf91a/crio-d27dc6b894e59aaaa0ff238adf7d5a4f30fb3efa3227a163da9628f31dcfaa6f WatchSource:0}: Error finding container d27dc6b894e59aaaa0ff238adf7d5a4f30fb3efa3227a163da9628f31dcfaa6f: Status 404 returned error can't find the container with id d27dc6b894e59aaaa0ff238adf7d5a4f30fb3efa3227a163da9628f31dcfaa6f
	Aug 21 10:53:01 multinode-200985 kubelet[1591]: W0821 10:53:01.432453    1591 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/30a11af662edf967ffb99de2ef034ce516ea0aacab8a798c9436236a541bf91a/crio-4f5a18f920c2dd680ecf8123568cb5810907bac1dfdf82fe4a1421aa35178d9e WatchSource:0}: Error finding container 4f5a18f920c2dd680ecf8123568cb5810907bac1dfdf82fe4a1421aa35178d9e: Status 404 returned error can't find the container with id 4f5a18f920c2dd680ecf8123568cb5810907bac1dfdf82fe4a1421aa35178d9e
	Aug 21 10:53:02 multinode-200985 kubelet[1591]: I0821 10:53:02.469485    1591 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-hr82h" podStartSLOduration=1.469449741 podCreationTimestamp="2023-08-21 10:53:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-08-21 10:53:02.457971281 +0000 UTC m=+15.220426318" watchObservedRunningTime="2023-08-21 10:53:02.469449741 +0000 UTC m=+15.231904778"
	Aug 21 10:53:32 multinode-200985 kubelet[1591]: I0821 10:53:32.087024    1591 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Aug 21 10:53:32 multinode-200985 kubelet[1591]: I0821 10:53:32.109630    1591 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-l9qdc" podStartSLOduration=31.109578049 podCreationTimestamp="2023-08-21 10:53:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-08-21 10:53:02.469663607 +0000 UTC m=+15.232118645" watchObservedRunningTime="2023-08-21 10:53:32.109578049 +0000 UTC m=+44.872033087"
	Aug 21 10:53:32 multinode-200985 kubelet[1591]: I0821 10:53:32.109884    1591 topology_manager.go:212] "Topology Admit Handler"
	Aug 21 10:53:32 multinode-200985 kubelet[1591]: I0821 10:53:32.111137    1591 topology_manager.go:212] "Topology Admit Handler"
	Aug 21 10:53:32 multinode-200985 kubelet[1591]: I0821 10:53:32.161551    1591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e31fc5d6-efb4-4659-95e0-45e4b0319116-config-volume\") pod \"coredns-5d78c9869d-p7wfm\" (UID: \"e31fc5d6-efb4-4659-95e0-45e4b0319116\") " pod="kube-system/coredns-5d78c9869d-p7wfm"
	Aug 21 10:53:32 multinode-200985 kubelet[1591]: I0821 10:53:32.161615    1591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/eb07b693-169e-45aa-999e-989f9eb6ae77-tmp\") pod \"storage-provisioner\" (UID: \"eb07b693-169e-45aa-999e-989f9eb6ae77\") " pod="kube-system/storage-provisioner"
	Aug 21 10:53:32 multinode-200985 kubelet[1591]: I0821 10:53:32.161650    1591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz6wp\" (UniqueName: \"kubernetes.io/projected/eb07b693-169e-45aa-999e-989f9eb6ae77-kube-api-access-qz6wp\") pod \"storage-provisioner\" (UID: \"eb07b693-169e-45aa-999e-989f9eb6ae77\") " pod="kube-system/storage-provisioner"
	Aug 21 10:53:32 multinode-200985 kubelet[1591]: I0821 10:53:32.161698    1591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcknj\" (UniqueName: \"kubernetes.io/projected/e31fc5d6-efb4-4659-95e0-45e4b0319116-kube-api-access-qcknj\") pod \"coredns-5d78c9869d-p7wfm\" (UID: \"e31fc5d6-efb4-4659-95e0-45e4b0319116\") " pod="kube-system/coredns-5d78c9869d-p7wfm"
	Aug 21 10:53:32 multinode-200985 kubelet[1591]: W0821 10:53:32.448211    1591 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/30a11af662edf967ffb99de2ef034ce516ea0aacab8a798c9436236a541bf91a/crio-db72f93bb3c51bf65115c477870658693e2ecb1c5f0ff7293cc122039833d0f6 WatchSource:0}: Error finding container db72f93bb3c51bf65115c477870658693e2ecb1c5f0ff7293cc122039833d0f6: Status 404 returned error can't find the container with id db72f93bb3c51bf65115c477870658693e2ecb1c5f0ff7293cc122039833d0f6
	Aug 21 10:53:32 multinode-200985 kubelet[1591]: W0821 10:53:32.448509    1591 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/30a11af662edf967ffb99de2ef034ce516ea0aacab8a798c9436236a541bf91a/crio-94f3a91b88fe9260fd6dccc650265650ae0d33c286879b41f7ceddaa9f2f9431 WatchSource:0}: Error finding container 94f3a91b88fe9260fd6dccc650265650ae0d33c286879b41f7ceddaa9f2f9431: Status 404 returned error can't find the container with id 94f3a91b88fe9260fd6dccc650265650ae0d33c286879b41f7ceddaa9f2f9431
	Aug 21 10:53:33 multinode-200985 kubelet[1591]: I0821 10:53:33.508866    1591 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-p7wfm" podStartSLOduration=32.508803001 podCreationTimestamp="2023-08-21 10:53:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-08-21 10:53:33.499456478 +0000 UTC m=+46.261911516" watchObservedRunningTime="2023-08-21 10:53:33.508803001 +0000 UTC m=+46.271258064"
	Aug 21 10:53:33 multinode-200985 kubelet[1591]: I0821 10:53:33.509324    1591 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=31.509276508 podCreationTimestamp="2023-08-21 10:53:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-08-21 10:53:33.509014101 +0000 UTC m=+46.271469139" watchObservedRunningTime="2023-08-21 10:53:33.509276508 +0000 UTC m=+46.271731558"
	Aug 21 10:54:06 multinode-200985 kubelet[1591]: I0821 10:54:06.313420    1591 topology_manager.go:212] "Topology Admit Handler"
	Aug 21 10:54:06 multinode-200985 kubelet[1591]: I0821 10:54:06.355305    1591 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86pgz\" (UniqueName: \"kubernetes.io/projected/35f55c4e-c20d-426e-b324-aa12b9425519-kube-api-access-86pgz\") pod \"busybox-67b7f59bb-4kkp2\" (UID: \"35f55c4e-c20d-426e-b324-aa12b9425519\") " pod="default/busybox-67b7f59bb-4kkp2"
	Aug 21 10:54:06 multinode-200985 kubelet[1591]: W0821 10:54:06.660294    1591 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/30a11af662edf967ffb99de2ef034ce516ea0aacab8a798c9436236a541bf91a/crio-adafce3896b992e6ed9c7591f548e1ba464dee077f0306da132800b3afc45dbe WatchSource:0}: Error finding container adafce3896b992e6ed9c7591f548e1ba464dee077f0306da132800b3afc45dbe: Status 404 returned error can't find the container with id adafce3896b992e6ed9c7591f548e1ba464dee077f0306da132800b3afc45dbe
	Aug 21 10:54:07 multinode-200985 kubelet[1591]: I0821 10:54:07.559222    1591 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-67b7f59bb-4kkp2" podStartSLOduration=1.0359144279999999 podCreationTimestamp="2023-08-21 10:54:06 +0000 UTC" firstStartedPulling="2023-08-21 10:54:06.663711425 +0000 UTC m=+79.426166459" lastFinishedPulling="2023-08-21 10:54:07.186973377 +0000 UTC m=+79.949428396" observedRunningTime="2023-08-21 10:54:07.558986094 +0000 UTC m=+80.321441132" watchObservedRunningTime="2023-08-21 10:54:07.559176365 +0000 UTC m=+80.321631402"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-200985 -n multinode-200985
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-200985 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.19s)

                                                
                                    
TestRunningBinaryUpgrade (91.9s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:132: (dbg) Run:  /tmp/minikube-v1.9.0.4100555988.exe start -p running-upgrade-619999 --memory=2200 --vm-driver=docker  --container-runtime=crio
E0821 11:03:43.839050   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/functional-923429/client.crt: no such file or directory
version_upgrade_test.go:132: (dbg) Done: /tmp/minikube-v1.9.0.4100555988.exe start -p running-upgrade-619999 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m24.92126516s)
version_upgrade_test.go:142: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-619999 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:142: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-619999 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (3.823219157s)

                                                
                                                
-- stdout --
	* [running-upgrade-619999] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17102
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17102-5717/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17102-5717/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.4
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-619999 in cluster running-upgrade-619999
	* Pulling base image ...
	* Updating the running docker "running-upgrade-619999" container ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0821 11:05:07.366529  166935 out.go:296] Setting OutFile to fd 1 ...
	I0821 11:05:07.366747  166935 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 11:05:07.366773  166935 out.go:309] Setting ErrFile to fd 2...
	I0821 11:05:07.366788  166935 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 11:05:07.367032  166935 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17102-5717/.minikube/bin
	I0821 11:05:07.367710  166935 out.go:303] Setting JSON to false
	I0821 11:05:07.369430  166935 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2857,"bootTime":1692613050,"procs":411,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0821 11:05:07.369524  166935 start.go:138] virtualization: kvm guest
	I0821 11:05:07.371776  166935 out.go:177] * [running-upgrade-619999] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0821 11:05:07.374002  166935 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 11:05:07.373875  166935 notify.go:220] Checking for updates...
	I0821 11:05:07.376631  166935 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 11:05:07.378445  166935 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17102-5717/kubeconfig
	I0821 11:05:07.380321  166935 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17102-5717/.minikube
	I0821 11:05:07.382266  166935 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0821 11:05:07.384665  166935 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 11:05:07.389653  166935 config.go:182] Loaded profile config "running-upgrade-619999": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0821 11:05:07.389706  166935 start_flags.go:695] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0821 11:05:07.392903  166935 out.go:177] * Kubernetes 1.27.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.4
	I0821 11:05:07.395304  166935 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 11:05:07.441898  166935 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0821 11:05:07.441993  166935 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0821 11:05:07.509332  166935 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:75 OomKillDisable:true NGoroutines:81 SystemTime:2023-08-21 11:05:07.497686518 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0821 11:05:07.509467  166935 docker.go:294] overlay module found
	I0821 11:05:07.511281  166935 out.go:177] * Using the docker driver based on existing profile
	I0821 11:05:07.512726  166935 start.go:298] selected driver: docker
	I0821 11:05:07.512744  166935 start.go:902] validating driver "docker" against &{Name:running-upgrade-619999 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-619999 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 11:05:07.512863  166935 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 11:05:07.513733  166935 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0821 11:05:07.581431  166935 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:58 SystemTime:2023-08-21 11:05:07.571676113 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0821 11:05:07.581737  166935 cni.go:84] Creating CNI manager for ""
	I0821 11:05:07.581759  166935 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I0821 11:05:07.581767  166935 start_flags.go:319] config:
	{Name:running-upgrade-619999 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-619999 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 11:05:07.583947  166935 out.go:177] * Starting control plane node running-upgrade-619999 in cluster running-upgrade-619999
	I0821 11:05:07.585714  166935 cache.go:122] Beginning downloading kic base image for docker with crio
	I0821 11:05:07.587277  166935 out.go:177] * Pulling base image ...
	I0821 11:05:07.588694  166935 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I0821 11:05:07.588821  166935 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0821 11:05:07.609909  166935 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0821 11:05:07.609935  166935 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	W0821 11:05:07.629513  166935 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0821 11:05:07.629650  166935 profile.go:148] Saving config to /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/running-upgrade-619999/config.json ...
	I0821 11:05:07.629799  166935 cache.go:107] acquiring lock: {Name:mkf46660acdf7ff03e108bf1cf65b1fef438520b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 11:05:07.629810  166935 cache.go:107] acquiring lock: {Name:mke72a81dd41e23a45d9a75f85e8ccd88500d8df Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 11:05:07.629797  166935 cache.go:107] acquiring lock: {Name:mk5348f13c23b9533a2e2ad38a7e985b30bc9819 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 11:05:07.629853  166935 cache.go:107] acquiring lock: {Name:mkd8ba3f69927e1e8ea102f808e07f6f57464583 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 11:05:07.629871  166935 cache.go:195] Successfully downloaded all kic artifacts
	I0821 11:05:07.629853  166935 cache.go:107] acquiring lock: {Name:mk90c4e563f6a7df67dc357b0dbdc42a5d1fe77c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 11:05:07.629864  166935 cache.go:107] acquiring lock: {Name:mkf62c953ab8ad47afd65d04bafeb9b4d807eee7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 11:05:07.629876  166935 cache.go:107] acquiring lock: {Name:mk8fab502b40c040bfbe4c7347a87eb74f2172f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 11:05:07.629898  166935 start.go:365] acquiring machines lock for running-upgrade-619999: {Name:mk45fa1d719fa4e0d0e97a9659d43d870ae1acf3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 11:05:07.629969  166935 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0821 11:05:07.629937  166935 cache.go:107] acquiring lock: {Name:mk87ad3bdd226d03bb02cbfe19c98cb195db50d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 11:05:07.629989  166935 cache.go:115] /home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0821 11:05:07.630001  166935 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 211.291µs
	I0821 11:05:07.629981  166935 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.0
	I0821 11:05:07.630047  166935 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.0
	I0821 11:05:07.630055  166935 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.0
	I0821 11:05:07.630070  166935 start.go:369] acquired machines lock for "running-upgrade-619999" in 157.987µs
	I0821 11:05:07.630085  166935 start.go:96] Skipping create...Using existing machine configuration
	I0821 11:05:07.630091  166935 fix.go:54] fixHost starting: m01
	I0821 11:05:07.630098  166935 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.0
	I0821 11:05:07.630106  166935 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0821 11:05:07.630013  166935 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0821 11:05:07.630207  166935 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0821 11:05:07.630367  166935 cli_runner.go:164] Run: docker container inspect running-upgrade-619999 --format={{.State.Status}}
	I0821 11:05:07.631282  166935 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.0
	I0821 11:05:07.631306  166935 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0821 11:05:07.631318  166935 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0821 11:05:07.631333  166935 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.0
	I0821 11:05:07.631494  166935 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.0
	I0821 11:05:07.631620  166935 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0821 11:05:07.631896  166935 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.0
	I0821 11:05:07.652226  166935 fix.go:102] recreateIfNeeded on running-upgrade-619999: state=Running err=<nil>
	W0821 11:05:07.652257  166935 fix.go:128] unexpected machine state, will restart: <nil>
	I0821 11:05:07.654611  166935 out.go:177] * Updating the running docker "running-upgrade-619999" container ...
	I0821 11:05:07.660004  166935 machine.go:88] provisioning docker machine ...
	I0821 11:05:07.660044  166935 ubuntu.go:169] provisioning hostname "running-upgrade-619999"
	I0821 11:05:07.660113  166935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-619999
	I0821 11:05:07.692519  166935 main.go:141] libmachine: Using SSH client type: native
	I0821 11:05:07.693222  166935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 127.0.0.1 32925 <nil> <nil>}
	I0821 11:05:07.693247  166935 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-619999 && echo "running-upgrade-619999" | sudo tee /etc/hostname
	I0821 11:05:07.836969  166935 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-619999
	
	I0821 11:05:07.837047  166935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-619999
	I0821 11:05:07.845479  166935 cache.go:162] opening:  /home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0821 11:05:07.854251  166935 main.go:141] libmachine: Using SSH client type: native
	I0821 11:05:07.854704  166935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 127.0.0.1 32925 <nil> <nil>}
	I0821 11:05:07.854730  166935 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-619999' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-619999/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-619999' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0821 11:05:07.855375  166935 cache.go:162] opening:  /home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0821 11:05:07.864952  166935 cache.go:162] opening:  /home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0
	I0821 11:05:07.870676  166935 cache.go:162] opening:  /home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0
	I0821 11:05:07.873361  166935 cache.go:162] opening:  /home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0
	I0821 11:05:07.907724  166935 cache.go:162] opening:  /home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0821 11:05:07.913757  166935 cache.go:162] opening:  /home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0
	I0821 11:05:07.936013  166935 cache.go:157] /home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I0821 11:05:07.936036  166935 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 306.196249ms
	I0821 11:05:07.936047  166935 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I0821 11:05:07.963095  166935 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0821 11:05:07.963118  166935 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17102-5717/.minikube CaCertPath:/home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17102-5717/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17102-5717/.minikube}
	I0821 11:05:07.963143  166935 ubuntu.go:177] setting up certificates
	I0821 11:05:07.963153  166935 provision.go:83] configureAuth start
	I0821 11:05:07.963203  166935 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-619999
	I0821 11:05:07.979761  166935 provision.go:138] copyHostCerts
	I0821 11:05:07.979827  166935 exec_runner.go:144] found /home/jenkins/minikube-integration/17102-5717/.minikube/ca.pem, removing ...
	I0821 11:05:07.979836  166935 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17102-5717/.minikube/ca.pem
	I0821 11:05:07.979907  166935 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17102-5717/.minikube/ca.pem (1078 bytes)
	I0821 11:05:07.980022  166935 exec_runner.go:144] found /home/jenkins/minikube-integration/17102-5717/.minikube/cert.pem, removing ...
	I0821 11:05:07.980032  166935 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17102-5717/.minikube/cert.pem
	I0821 11:05:07.980065  166935 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17102-5717/.minikube/cert.pem (1123 bytes)
	I0821 11:05:07.980149  166935 exec_runner.go:144] found /home/jenkins/minikube-integration/17102-5717/.minikube/key.pem, removing ...
	I0821 11:05:07.980161  166935 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17102-5717/.minikube/key.pem
	I0821 11:05:07.980254  166935 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17102-5717/.minikube/key.pem (1675 bytes)
	I0821 11:05:07.980395  166935 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17102-5717/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-619999 san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-619999]
	I0821 11:05:08.292850  166935 provision.go:172] copyRemoteCerts
	I0821 11:05:08.292979  166935 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0821 11:05:08.293039  166935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-619999
	I0821 11:05:08.335057  166935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32925 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/running-upgrade-619999/id_rsa Username:docker}
	I0821 11:05:08.379020  166935 cache.go:157] /home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I0821 11:05:08.379047  166935 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 749.133739ms
	I0821 11:05:08.379061  166935 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I0821 11:05:08.431128  166935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0821 11:05:08.451085  166935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0821 11:05:08.517882  166935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0821 11:05:08.555844  166935 provision.go:86] duration metric: configureAuth took 592.565186ms
	I0821 11:05:08.555865  166935 ubuntu.go:193] setting minikube options for container-runtime
	I0821 11:05:08.556087  166935 config.go:182] Loaded profile config "running-upgrade-619999": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0821 11:05:08.556256  166935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-619999
	I0821 11:05:08.583890  166935 main.go:141] libmachine: Using SSH client type: native
	I0821 11:05:08.584548  166935 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 127.0.0.1 32925 <nil> <nil>}
	I0821 11:05:08.584595  166935 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0821 11:05:08.851406  166935 cache.go:157] /home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I0821 11:05:08.851452  166935 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 1.221603269s
	I0821 11:05:08.851499  166935 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I0821 11:05:09.112566  166935 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0821 11:05:09.112645  166935 machine.go:91] provisioned docker machine in 1.452621314s
	I0821 11:05:09.112655  166935 start.go:300] post-start starting for "running-upgrade-619999" (driver="docker")
	I0821 11:05:09.112677  166935 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0821 11:05:09.112751  166935 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0821 11:05:09.112797  166935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-619999
	I0821 11:05:09.136649  166935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32925 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/running-upgrade-619999/id_rsa Username:docker}
	I0821 11:05:09.225817  166935 ssh_runner.go:195] Run: cat /etc/os-release
	I0821 11:05:09.228569  166935 cache.go:157] /home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I0821 11:05:09.228594  166935 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 1.598799433s
	I0821 11:05:09.228609  166935 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I0821 11:05:09.232700  166935 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0821 11:05:09.232718  166935 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0821 11:05:09.232781  166935 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0821 11:05:09.232788  166935 info.go:137] Remote host: Ubuntu 19.10
	I0821 11:05:09.232799  166935 filesync.go:126] Scanning /home/jenkins/minikube-integration/17102-5717/.minikube/addons for local assets ...
	I0821 11:05:09.232845  166935 filesync.go:126] Scanning /home/jenkins/minikube-integration/17102-5717/.minikube/files for local assets ...
	I0821 11:05:09.232925  166935 filesync.go:149] local asset: /home/jenkins/minikube-integration/17102-5717/.minikube/files/etc/ssl/certs/124602.pem -> 124602.pem in /etc/ssl/certs
	I0821 11:05:09.233039  166935 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0821 11:05:09.238652  166935 cache.go:157] /home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I0821 11:05:09.238673  166935 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 1.608894949s
	I0821 11:05:09.238685  166935 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I0821 11:05:09.244543  166935 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/files/etc/ssl/certs/124602.pem --> /etc/ssl/certs/124602.pem (1708 bytes)
	I0821 11:05:09.273577  166935 start.go:303] post-start completed in 160.906943ms
	I0821 11:05:09.273659  166935 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0821 11:05:09.273707  166935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-619999
	I0821 11:05:09.298262  166935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32925 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/running-upgrade-619999/id_rsa Username:docker}
	I0821 11:05:09.384859  166935 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0821 11:05:09.390040  166935 fix.go:56] fixHost completed within 1.759944189s
	I0821 11:05:09.390066  166935 start.go:83] releasing machines lock for "running-upgrade-619999", held for 1.75998792s
	I0821 11:05:09.390133  166935 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-619999
	I0821 11:05:09.408447  166935 ssh_runner.go:195] Run: cat /version.json
	I0821 11:05:09.408490  166935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-619999
	I0821 11:05:09.408614  166935 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0821 11:05:09.408704  166935 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-619999
	I0821 11:05:09.425419  166935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32925 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/running-upgrade-619999/id_rsa Username:docker}
	I0821 11:05:09.432296  166935 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32925 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/running-upgrade-619999/id_rsa Username:docker}
	W0821 11:05:09.548808  166935 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0821 11:05:09.725072  166935 cache.go:157] /home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I0821 11:05:09.725105  166935 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 2.095229102s
	I0821 11:05:09.725121  166935 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I0821 11:05:10.088614  166935 cache.go:157] /home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0821 11:05:10.088647  166935 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 2.458795955s
	I0821 11:05:10.088660  166935 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0821 11:05:10.088675  166935 cache.go:87] Successfully saved all images to host disk.
	I0821 11:05:10.088726  166935 ssh_runner.go:195] Run: systemctl --version
	I0821 11:05:10.092667  166935 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0821 11:05:10.146282  166935 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0821 11:05:10.150513  166935 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0821 11:05:10.226245  166935 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0821 11:05:10.226347  166935 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0821 11:05:10.391504  166935 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
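	Note: the two find/mv runs above implement minikube's CNI-disable pattern: matching configs under /etc/cni/net.d are renamed with a .mk_disabled suffix so the runtime ignores them without losing their contents. A minimal sketch of reversing that rename by hand (assumes only the suffix convention shown here; this is not a minikube command):
	
	  for f in /etc/cni/net.d/*.mk_disabled; do
	    [ -e "$f" ] || continue            # nothing was disabled
	    sudo mv "$f" "${f%.mk_disabled}"   # strip the suffix to re-enable
	  done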
	I0821 11:05:10.391529  166935 start.go:466] detecting cgroup driver to use...
	I0821 11:05:10.391565  166935 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0821 11:05:10.391614  166935 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0821 11:05:10.416373  166935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0821 11:05:10.426082  166935 docker.go:196] disabling cri-docker service (if available) ...
	I0821 11:05:10.426158  166935 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0821 11:05:10.436457  166935 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0821 11:05:10.446357  166935 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0821 11:05:10.456422  166935 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0821 11:05:10.456478  166935 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0821 11:05:10.528184  166935 docker.go:212] disabling docker service ...
	I0821 11:05:10.528247  166935 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0821 11:05:10.538961  166935 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0821 11:05:10.548493  166935 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0821 11:05:10.626429  166935 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0821 11:05:10.699339  166935 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0821 11:05:10.708987  166935 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0821 11:05:10.758445  166935 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0821 11:05:10.758506  166935 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0821 11:05:10.894286  166935 out.go:177] 
	W0821 11:05:10.914892  166935 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0821 11:05:10.914917  166935 out.go:239] * 
	W0821 11:05:10.916255  166935 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0821 11:05:10.997724  166935 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:144: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-619999 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-08-21 11:05:11.142871485 +0000 UTC m=+1910.762508942
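Note: the root cause is visible in the stderr above. The cluster was created by the old v1.9.0 binary on kicbase v0.0.8 (Ubuntu 19.10 guest, per the docker inspect below), which ships no /etc/crio/crio.conf.d/02-crio.conf, so the HEAD binary's sed edit of pause_image exits with status 2 and start aborts with RUNTIME_ENABLE. A minimal sketch of a defensive variant of that step, assuming pause_image belongs under cri-o's [crio.image] table; the guard is illustrative, not minikube's actual remediation:

	CONF=/etc/crio/crio.conf.d/02-crio.conf
	if [ ! -f "$CONF" ]; then
	  # Older kicbase images ship no crio.conf.d drop-in; create one
	  # instead of sed-editing a file that does not exist.
	  sudo mkdir -p /etc/crio/crio.conf.d
	  printf '[crio.image]\npause_image = "registry.k8s.io/pause:3.2"\n' | sudo tee "$CONF" >/dev/null
	else
	  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$CONF"
	fi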
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-619999
helpers_test.go:235: (dbg) docker inspect running-upgrade-619999:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "09d97104abbd55188633f82d595fe52fba5e1eb1565c166e6a3380183084c3aa",
	        "Created": "2023-08-21T11:04:00.059124102Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 146180,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-08-21T11:04:00.46181427Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/09d97104abbd55188633f82d595fe52fba5e1eb1565c166e6a3380183084c3aa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/09d97104abbd55188633f82d595fe52fba5e1eb1565c166e6a3380183084c3aa/hostname",
	        "HostsPath": "/var/lib/docker/containers/09d97104abbd55188633f82d595fe52fba5e1eb1565c166e6a3380183084c3aa/hosts",
	        "LogPath": "/var/lib/docker/containers/09d97104abbd55188633f82d595fe52fba5e1eb1565c166e6a3380183084c3aa/09d97104abbd55188633f82d595fe52fba5e1eb1565c166e6a3380183084c3aa-json.log",
	        "Name": "/running-upgrade-619999",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-619999:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f0bbefe4c57c440bfd73172a938112d80b9e2e81defc7c355e698f5fc61afafc-init/diff:/var/lib/docker/overlay2/61fb91670f72cd1d7334ced5c5180b69a0df4018286d18dc4f98d1df707441da/diff:/var/lib/docker/overlay2/1525209e4b14b9822603bfe707524c9cf5650c3ec4983e427252c8652ac31a86/diff:/var/lib/docker/overlay2/81e8844c02e9dd97667376ec8210ab7fca40fe0bff8f9a909553447e6843edb0/diff:/var/lib/docker/overlay2/b4c4aa4680da7bd7f970d8419062b1d0e075d9e870ceb73a44d0059835a8a61e/diff:/var/lib/docker/overlay2/5518001ebf6f36cc6d71119c757f800ed67ab5b33cca0d45b6790929a6dcb1c0/diff:/var/lib/docker/overlay2/324531cce81b43403b626db160789cf29e224bd8905479c36b58133a7d26b854/diff:/var/lib/docker/overlay2/6159740d62d16878c92dbfdd2d1bc7cb5c7ab80ce7e5c28223c722710d73fbc4/diff:/var/lib/docker/overlay2/14dbeadcf3f6c9006fa92a048908c4781a8d3c9bb12ac394107b6dc33d7c615a/diff:/var/lib/docker/overlay2/b0b876e4a87a7276786e6be3a104392fd89a7f80f8389ed54492a24499391942/diff:/var/lib/docker/overlay2/f64c12
54571857bc9bb81ebba2cbc7c78838ae03420504559ea51ebbe3f7eec4/diff:/var/lib/docker/overlay2/e835f891af4bfada68e089bd3704cc1f64c0e74fef7b506c572b5daa823839cf/diff:/var/lib/docker/overlay2/e464a84211f4f455bf1de8958654e9d185cc5d2db42f6f5d629418cf68fc9819/diff:/var/lib/docker/overlay2/4d8be6a1cd18fa445d50c904a0c36a6c61b45f3da45270e09bf745f2f59ddb6c/diff:/var/lib/docker/overlay2/c4763b7837b940ce3f0c2c29408369473fbbe887d60d02786c89e7aa1ee111cc/diff:/var/lib/docker/overlay2/daae8fdfd4e87feda987d1cec603835101bdb4c6923ffb71940b51699bfba2f1/diff:/var/lib/docker/overlay2/d3b98533cf3241ec002e131d3995097199f4595b8c99e721c436586f5b7c2b80/diff:/var/lib/docker/overlay2/40d7333512718f6d258b3364de0a4ec61564d2ebb017af42a351e5f1b496141c/diff:/var/lib/docker/overlay2/61ea15e2560cb284475e1504bc114f5c0a22d79e8587d20dfa8c14d97c8ea60d/diff:/var/lib/docker/overlay2/4b9591234a7a5d04e35968e75116e7d7e6c73012d9d2c03aefc1e21a85db4b4f/diff:/var/lib/docker/overlay2/d217ee02f33d2121ea12df12ee939dfea72426ec134ddb066b260af57c4c8e66/diff:/var/lib/d
ocker/overlay2/bad1e34e2f523a4a79ba6f7df647592ceece1bf3171caa44eb934719423c6c0d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f0bbefe4c57c440bfd73172a938112d80b9e2e81defc7c355e698f5fc61afafc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f0bbefe4c57c440bfd73172a938112d80b9e2e81defc7c355e698f5fc61afafc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f0bbefe4c57c440bfd73172a938112d80b9e2e81defc7c355e698f5fc61afafc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-619999",
	                "Source": "/var/lib/docker/volumes/running-upgrade-619999/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-619999",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-619999",
	                "name.minikube.sigs.k8s.io": "running-upgrade-619999",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dc88b35fc0e9254a0f0fab232deed02b62830679de15a85a5cd6936b19acca42",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32925"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32924"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32923"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/dc88b35fc0e9",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "fc63150c6e85e97c6a84366a6e3f38d06bffb4ed2aa7b446b59d7e7d11e05047",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "aff2e99839a2867f403750582c85cdb6461b5180e83fb30f3d46d9f2cb8ea0c8",
	                    "EndpointID": "fc63150c6e85e97c6a84366a6e3f38d06bffb4ed2aa7b446b59d7e7d11e05047",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
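Note: the Ports block above is where minikube resolves its SSH endpoint from; the docker container inspect template that recurs throughout this log maps 22/tcp to 127.0.0.1:32925. The same lookup, runnable on its own (container name taken from this run):

	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  running-upgrade-619999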
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-619999 -n running-upgrade-619999
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-619999 -n running-upgrade-619999: exit status 4 (292.599332ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0821 11:05:11.425382  168300 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-619999" does not appear in /home/jenkins/minikube-integration/17102-5717/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-619999" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-619999" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-619999
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-619999: (2.37977265s)
--- FAIL: TestRunningBinaryUpgrade (91.90s)
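Note: the post-mortem also shows a follow-on symptom: the aborted start never registered the profile in the kubeconfig, so status exits 4 with the stale-context warning above. The recovery the warning itself suggests, sketched with this run's profile name:

	minikube update-context -p running-upgrade-619999    # rewrite the kubeconfig entry
	kubectl config get-contexts running-upgrade-619999   # confirm the context now exists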

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (46.29s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-942142 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-942142 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (41.504806384s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-942142] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17102
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17102-5717/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17102-5717/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node pause-942142 in cluster pause-942142
	* Pulling base image ...
	* Updating the running docker "pause-942142" container ...
	* Preparing Kubernetes v1.27.4 on CRI-O 1.24.6 ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-942142" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0821 11:06:23.924338  188079 out.go:296] Setting OutFile to fd 1 ...
	I0821 11:06:23.924463  188079 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 11:06:23.924473  188079 out.go:309] Setting ErrFile to fd 2...
	I0821 11:06:23.924479  188079 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 11:06:23.924736  188079 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17102-5717/.minikube/bin
	I0821 11:06:23.925343  188079 out.go:303] Setting JSON to false
	I0821 11:06:23.926727  188079 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2934,"bootTime":1692613050,"procs":477,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0821 11:06:23.926811  188079 start.go:138] virtualization: kvm guest
	I0821 11:06:23.929511  188079 out.go:177] * [pause-942142] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0821 11:06:23.931274  188079 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 11:06:23.932553  188079 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 11:06:23.931320  188079 notify.go:220] Checking for updates...
	I0821 11:06:23.935431  188079 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17102-5717/kubeconfig
	I0821 11:06:23.936774  188079 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17102-5717/.minikube
	I0821 11:06:23.938187  188079 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0821 11:06:23.939458  188079 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 11:06:23.941209  188079 config.go:182] Loaded profile config "pause-942142": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0821 11:06:23.942606  188079 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 11:06:23.966811  188079 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0821 11:06:23.966901  188079 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0821 11:06:24.024734  188079 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:true NGoroutines:80 SystemTime:2023-08-21 11:06:24.016278179 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0821 11:06:24.024834  188079 docker.go:294] overlay module found
	I0821 11:06:24.026990  188079 out.go:177] * Using the docker driver based on existing profile
	I0821 11:06:24.028488  188079 start.go:298] selected driver: docker
	I0821 11:06:24.028507  188079 start.go:902] validating driver "docker" against &{Name:pause-942142 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:pause-942142 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 11:06:24.028621  188079 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 11:06:24.028705  188079 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0821 11:06:24.089124  188079 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:true NGoroutines:80 SystemTime:2023-08-21 11:06:24.079933166 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0821 11:06:24.089859  188079 cni.go:84] Creating CNI manager for ""
	I0821 11:06:24.089886  188079 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0821 11:06:24.089900  188079 start_flags.go:319] config:
	{Name:pause-942142 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:pause-942142 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 11:06:24.092186  188079 out.go:177] * Starting control plane node pause-942142 in cluster pause-942142
	I0821 11:06:24.093577  188079 cache.go:122] Beginning downloading kic base image for docker with crio
	I0821 11:06:24.094936  188079 out.go:177] * Pulling base image ...
	I0821 11:06:24.096362  188079 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0821 11:06:24.096400  188079 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17102-5717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4
	I0821 11:06:24.096422  188079 cache.go:57] Caching tarball of preloaded images
	I0821 11:06:24.096458  188079 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0821 11:06:24.096490  188079 preload.go:174] Found /home/jenkins/minikube-integration/17102-5717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0821 11:06:24.096501  188079 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on crio
	I0821 11:06:24.096629  188079 profile.go:148] Saving config to /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/pause-942142/config.json ...
	I0821 11:06:24.113493  188079 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0821 11:06:24.113541  188079 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0821 11:06:24.113570  188079 cache.go:195] Successfully downloaded all kic artifacts
	I0821 11:06:24.113616  188079 start.go:365] acquiring machines lock for pause-942142: {Name:mka067822488f2e79a22e41acf5e2a12368c2e19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 11:06:24.113695  188079 start.go:369] acquired machines lock for "pause-942142" in 50.034µs
	I0821 11:06:24.113718  188079 start.go:96] Skipping create...Using existing machine configuration
	I0821 11:06:24.113728  188079 fix.go:54] fixHost starting: 
	I0821 11:06:24.114009  188079 cli_runner.go:164] Run: docker container inspect pause-942142 --format={{.State.Status}}
	I0821 11:06:24.131141  188079 fix.go:102] recreateIfNeeded on pause-942142: state=Running err=<nil>
	W0821 11:06:24.131187  188079 fix.go:128] unexpected machine state, will restart: <nil>
	I0821 11:06:24.132733  188079 out.go:177] * Updating the running docker "pause-942142" container ...
	I0821 11:06:24.134720  188079 machine.go:88] provisioning docker machine ...
	I0821 11:06:24.134758  188079 ubuntu.go:169] provisioning hostname "pause-942142"
	I0821 11:06:24.134834  188079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-942142
	I0821 11:06:24.152080  188079 main.go:141] libmachine: Using SSH client type: native
	I0821 11:06:24.152526  188079 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 127.0.0.1 32968 <nil> <nil>}
	I0821 11:06:24.152543  188079 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-942142 && echo "pause-942142" | sudo tee /etc/hostname
	I0821 11:06:24.398911  188079 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-942142
	
	I0821 11:06:24.398995  188079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-942142
	I0821 11:06:24.416367  188079 main.go:141] libmachine: Using SSH client type: native
	I0821 11:06:24.416879  188079 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 127.0.0.1 32968 <nil> <nil>}
	I0821 11:06:24.416913  188079 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-942142' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-942142/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-942142' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0821 11:06:24.543824  188079 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0821 11:06:24.543854  188079 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17102-5717/.minikube CaCertPath:/home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17102-5717/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17102-5717/.minikube}
	I0821 11:06:24.543891  188079 ubuntu.go:177] setting up certificates
	I0821 11:06:24.543908  188079 provision.go:83] configureAuth start
	I0821 11:06:24.543965  188079 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-942142
	I0821 11:06:24.561983  188079 provision.go:138] copyHostCerts
	I0821 11:06:24.562043  188079 exec_runner.go:144] found /home/jenkins/minikube-integration/17102-5717/.minikube/key.pem, removing ...
	I0821 11:06:24.562064  188079 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17102-5717/.minikube/key.pem
	I0821 11:06:24.562183  188079 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17102-5717/.minikube/key.pem (1675 bytes)
	I0821 11:06:24.562313  188079 exec_runner.go:144] found /home/jenkins/minikube-integration/17102-5717/.minikube/ca.pem, removing ...
	I0821 11:06:24.562326  188079 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17102-5717/.minikube/ca.pem
	I0821 11:06:24.562363  188079 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17102-5717/.minikube/ca.pem (1078 bytes)
	I0821 11:06:24.562451  188079 exec_runner.go:144] found /home/jenkins/minikube-integration/17102-5717/.minikube/cert.pem, removing ...
	I0821 11:06:24.562456  188079 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17102-5717/.minikube/cert.pem
	I0821 11:06:24.562489  188079 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17102-5717/.minikube/cert.pem (1123 bytes)
	I0821 11:06:24.562565  188079 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17102-5717/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca-key.pem org=jenkins.pause-942142 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube pause-942142]
	I0821 11:06:24.646249  188079 provision.go:172] copyRemoteCerts
	I0821 11:06:24.646321  188079 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0821 11:06:24.646359  188079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-942142
	I0821 11:06:24.662193  188079 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32968 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/pause-942142/id_rsa Username:docker}
	I0821 11:06:24.763536  188079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0821 11:06:24.795101  188079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0821 11:06:24.816815  188079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0821 11:06:24.839286  188079 provision.go:86] duration metric: configureAuth took 295.359214ms
	I0821 11:06:24.839316  188079 ubuntu.go:193] setting minikube options for container-runtime
	I0821 11:06:24.839645  188079 config.go:182] Loaded profile config "pause-942142": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0821 11:06:24.839778  188079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-942142
	I0821 11:06:24.867647  188079 main.go:141] libmachine: Using SSH client type: native
	I0821 11:06:24.868195  188079 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 127.0.0.1 32968 <nil> <nil>}
	I0821 11:06:24.868213  188079 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0821 11:06:30.259130  188079 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0821 11:06:30.259158  188079 machine.go:91] provisioned docker machine in 6.124419627s
	I0821 11:06:30.259168  188079 start.go:300] post-start starting for "pause-942142" (driver="docker")
	I0821 11:06:30.259180  188079 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0821 11:06:30.259257  188079 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0821 11:06:30.259302  188079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-942142
	I0821 11:06:30.280589  188079 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32968 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/pause-942142/id_rsa Username:docker}
	I0821 11:06:30.378965  188079 ssh_runner.go:195] Run: cat /etc/os-release
	I0821 11:06:30.382860  188079 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0821 11:06:30.382910  188079 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0821 11:06:30.382924  188079 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0821 11:06:30.382930  188079 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0821 11:06:30.382939  188079 filesync.go:126] Scanning /home/jenkins/minikube-integration/17102-5717/.minikube/addons for local assets ...
	I0821 11:06:30.382978  188079 filesync.go:126] Scanning /home/jenkins/minikube-integration/17102-5717/.minikube/files for local assets ...
	I0821 11:06:30.383040  188079 filesync.go:149] local asset: /home/jenkins/minikube-integration/17102-5717/.minikube/files/etc/ssl/certs/124602.pem -> 124602.pem in /etc/ssl/certs
	I0821 11:06:30.383116  188079 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0821 11:06:30.399888  188079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/files/etc/ssl/certs/124602.pem --> /etc/ssl/certs/124602.pem (1708 bytes)
	I0821 11:06:30.423089  188079 start.go:303] post-start completed in 163.90796ms
	I0821 11:06:30.423164  188079 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0821 11:06:30.423207  188079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-942142
	I0821 11:06:30.444944  188079 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32968 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/pause-942142/id_rsa Username:docker}
	I0821 11:06:30.544849  188079 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0821 11:06:30.549522  188079 fix.go:56] fixHost completed within 6.435789256s
	I0821 11:06:30.549546  188079 start.go:83] releasing machines lock for "pause-942142", held for 6.435834915s
	I0821 11:06:30.549613  188079 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-942142
	I0821 11:06:30.569356  188079 ssh_runner.go:195] Run: cat /version.json
	I0821 11:06:30.569419  188079 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0821 11:06:30.569449  188079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-942142
	I0821 11:06:30.569498  188079 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-942142
	I0821 11:06:30.590018  188079 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32968 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/pause-942142/id_rsa Username:docker}
	I0821 11:06:30.592270  188079 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32968 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/pause-942142/id_rsa Username:docker}
	I0821 11:06:30.784309  188079 ssh_runner.go:195] Run: systemctl --version
	I0821 11:06:30.788395  188079 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0821 11:06:30.925123  188079 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0821 11:06:30.929357  188079 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0821 11:06:30.937312  188079 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0821 11:06:30.937381  188079 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0821 11:06:30.945440  188079 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0821 11:06:30.945462  188079 start.go:466] detecting cgroup driver to use...
	I0821 11:06:30.945497  188079 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0821 11:06:30.945535  188079 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0821 11:06:30.956945  188079 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0821 11:06:30.968809  188079 docker.go:196] disabling cri-docker service (if available) ...
	I0821 11:06:30.968873  188079 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0821 11:06:30.982215  188079 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0821 11:06:30.993595  188079 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0821 11:06:31.119550  188079 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0821 11:06:31.241712  188079 docker.go:212] disabling docker service ...
	I0821 11:06:31.241781  188079 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0821 11:06:31.254346  188079 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0821 11:06:31.266508  188079 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0821 11:06:31.387280  188079 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0821 11:06:31.509638  188079 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0821 11:06:31.522205  188079 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0821 11:06:31.543044  188079 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0821 11:06:31.543108  188079 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0821 11:06:31.554790  188079 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0821 11:06:31.554882  188079 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0821 11:06:31.571643  188079 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0821 11:06:31.586063  188079 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0821 11:06:31.598961  188079 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0821 11:06:31.646384  188079 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0821 11:06:31.657489  188079 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0821 11:06:31.670228  188079 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0821 11:06:32.257543  188079 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0821 11:06:42.397986  188079 ssh_runner.go:235] Completed: sudo systemctl restart crio: (10.140410132s)
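	Note: this ~10s crio restart is consistent with the test failure above: the second start rewrote the cri-o drop-in (pause_image, cgroup_manager, conmon_cgroup via the sed calls above) and restarted the runtime, so the expected "The running cluster does not require reconfiguration" fast path was never taken. A quick check of what those edits left behind, assuming the file path named in the log:
	
	  grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf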
	I0821 11:06:42.398012  188079 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0821 11:06:42.398061  188079 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0821 11:06:42.401568  188079 start.go:534] Will wait 60s for crictl version
	I0821 11:06:42.401658  188079 ssh_runner.go:195] Run: which crictl
	I0821 11:06:42.405195  188079 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0821 11:06:42.439745  188079 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0821 11:06:42.439813  188079 ssh_runner.go:195] Run: crio --version
	I0821 11:06:42.475106  188079 ssh_runner.go:195] Run: crio --version
	I0821 11:06:42.513643  188079 out.go:177] * Preparing Kubernetes v1.27.4 on CRI-O 1.24.6 ...
	I0821 11:06:42.514959  188079 cli_runner.go:164] Run: docker network inspect pause-942142 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0821 11:06:42.533340  188079 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
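The network inspect above pulls the subnet, gateway and MTU out of docker's IPAM config with a single Go template, and the grep confirms the gateway is already published as host.minikube.internal in /etc/hosts. A smaller template for just the address data (output shown for this run's network; the subnet is inferred from the /24 prefix in the inspect dump later in this report):

    docker network inspect pause-942142 \
      --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
    # -> 192.168.76.0/24 192.168.76.1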
	I0821 11:06:42.537284  188079 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0821 11:06:42.537351  188079 ssh_runner.go:195] Run: sudo crictl images --output json
	I0821 11:06:42.577323  188079 crio.go:496] all images are preloaded for cri-o runtime.
	I0821 11:06:42.577359  188079 crio.go:415] Images already preloaded, skipping extraction
	I0821 11:06:42.577424  188079 ssh_runner.go:195] Run: sudo crictl images --output json
	I0821 11:06:42.610040  188079 crio.go:496] all images are preloaded for cri-o runtime.
	I0821 11:06:42.610061  188079 cache_images.go:84] Images are preloaded, skipping loading
	I0821 11:06:42.610110  188079 ssh_runner.go:195] Run: crio config
	I0821 11:06:42.662582  188079 cni.go:84] Creating CNI manager for ""
	I0821 11:06:42.662604  188079 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0821 11:06:42.662619  188079 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0821 11:06:42.662635  188079 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.27.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-942142 NodeName:pause-942142 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0821 11:06:42.662782  188079 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-942142"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
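The manifest above is the config minikube renders before shipping it to /var/tmp/minikube/kubeadm.yaml.new (the scp a few lines below). A hedged way to check such a file without touching node state, assuming the kubeadm binary staged under /var/lib/minikube/binaries as the log shows:

    # dry-run the rendered config; prints the objects kubeadm would create
    sudo /var/lib/minikube/binaries/v1.27.4/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run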
	I0821 11:06:42.662852  188079 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=pause-942142 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.4 ClusterName:pause-942142 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
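In the kubelet unit above, the empty ExecStart= line is the standard systemd drop-in idiom: it clears the ExecStart inherited from /lib/systemd/system/kubelet.service so the following line fully replaces it rather than appending a second command. Standard commands to see the merged result:

    # base unit plus the 10-kubeadm.conf drop-in, rendered together
    systemctl cat kubelet
    # the effective command line after the reset
    systemctl show kubelet -p ExecStart --no-pager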
	I0821 11:06:42.662898  188079 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.4
	I0821 11:06:42.672262  188079 binaries.go:44] Found k8s binaries, skipping transfer
	I0821 11:06:42.672330  188079 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0821 11:06:42.681603  188079 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (422 bytes)
	I0821 11:06:42.699688  188079 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0821 11:06:42.716883  188079 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2093 bytes)
	I0821 11:06:42.734166  188079 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0821 11:06:42.737486  188079 certs.go:56] Setting up /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/pause-942142 for IP: 192.168.76.2
	I0821 11:06:42.737517  188079 certs.go:190] acquiring lock for shared ca certs: {Name:mkb88db7eb1befc1f1b3279575458c71b2313cc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 11:06:42.737668  188079 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17102-5717/.minikube/ca.key
	I0821 11:06:42.737710  188079 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17102-5717/.minikube/proxy-client-ca.key
	I0821 11:06:42.737776  188079 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/pause-942142/client.key
	I0821 11:06:42.737827  188079 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/pause-942142/apiserver.key.31bdca25
	I0821 11:06:42.737861  188079 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/pause-942142/proxy-client.key
	I0821 11:06:42.737996  188079 certs.go:437] found cert: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/home/jenkins/minikube-integration/17102-5717/.minikube/certs/12460.pem (1338 bytes)
	W0821 11:06:42.738026  188079 certs.go:433] ignoring /home/jenkins/minikube-integration/17102-5717/.minikube/certs/home/jenkins/minikube-integration/17102-5717/.minikube/certs/12460_empty.pem, impossibly tiny 0 bytes
	I0821 11:06:42.738036  188079 certs.go:437] found cert: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca-key.pem (1679 bytes)
	I0821 11:06:42.738056  188079 certs.go:437] found cert: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca.pem (1078 bytes)
	I0821 11:06:42.738080  188079 certs.go:437] found cert: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/home/jenkins/minikube-integration/17102-5717/.minikube/certs/cert.pem (1123 bytes)
	I0821 11:06:42.738101  188079 certs.go:437] found cert: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/home/jenkins/minikube-integration/17102-5717/.minikube/certs/key.pem (1675 bytes)
	I0821 11:06:42.738142  188079 certs.go:437] found cert: /home/jenkins/minikube-integration/17102-5717/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17102-5717/.minikube/files/etc/ssl/certs/124602.pem (1708 bytes)
	I0821 11:06:42.738775  188079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/pause-942142/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0821 11:06:42.761060  188079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/pause-942142/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0821 11:06:42.782701  188079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/pause-942142/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0821 11:06:42.805163  188079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/pause-942142/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0821 11:06:42.828783  188079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0821 11:06:42.850909  188079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0821 11:06:42.872510  188079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0821 11:06:42.894221  188079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0821 11:06:42.917382  188079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/certs/12460.pem --> /usr/share/ca-certificates/12460.pem (1338 bytes)
	I0821 11:06:42.971967  188079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/files/etc/ssl/certs/124602.pem --> /usr/share/ca-certificates/124602.pem (1708 bytes)
	I0821 11:06:42.997366  188079 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0821 11:06:43.018554  188079 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0821 11:06:43.035298  188079 ssh_runner.go:195] Run: openssl version
	I0821 11:06:43.040098  188079 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/124602.pem && ln -fs /usr/share/ca-certificates/124602.pem /etc/ssl/certs/124602.pem"
	I0821 11:06:43.048270  188079 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/124602.pem
	I0821 11:06:43.051161  188079 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 21 10:39 /usr/share/ca-certificates/124602.pem
	I0821 11:06:43.051201  188079 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/124602.pem
	I0821 11:06:43.057063  188079 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/124602.pem /etc/ssl/certs/3ec20f2e.0"
	I0821 11:06:43.064996  188079 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0821 11:06:43.073414  188079 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0821 11:06:43.076744  188079 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 21 10:34 /usr/share/ca-certificates/minikubeCA.pem
	I0821 11:06:43.076798  188079 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0821 11:06:43.082784  188079 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0821 11:06:43.090713  188079 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12460.pem && ln -fs /usr/share/ca-certificates/12460.pem /etc/ssl/certs/12460.pem"
	I0821 11:06:43.099286  188079 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12460.pem
	I0821 11:06:43.102488  188079 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 21 10:39 /usr/share/ca-certificates/12460.pem
	I0821 11:06:43.102537  188079 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12460.pem
	I0821 11:06:43.109123  188079 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12460.pem /etc/ssl/certs/51391683.0"
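The ls/hash/ln triples above implement OpenSSL's hashed-directory lookup: a CA under /etc/ssl/certs is located via a <subject-hash>.0 symlink, and `openssl x509 -hash -noout` prints exactly that hash (b5213941 for minikubeCA.pem, matching the link created above). Reproducing one link by hand:

    # compute the subject hash OpenSSL uses for directory lookup
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    # create the lookup symlink the same way the log does
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"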
	I0821 11:06:43.117271  188079 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0821 11:06:43.120416  188079 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0821 11:06:43.126467  188079 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0821 11:06:43.132321  188079 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0821 11:06:43.138391  188079 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0821 11:06:43.144287  188079 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0821 11:06:43.150144  188079 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
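Each -checkend 86400 probe above exits non-zero if the certificate expires within 86400 seconds (24 hours), a pre-flight for reusing the existing control-plane certs. The same check as a loop over a few of the files from the log:

    # openssl exits 1 when the cert is inside the expiry window
    for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
      sudo openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400 \
        || echo "${c}.crt expires within 24h"
    done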
	I0821 11:06:43.156239  188079 kubeadm.go:404] StartCluster: {Name:pause-942142 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:pause-942142 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 11:06:43.156344  188079 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0821 11:06:43.156388  188079 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0821 11:06:43.189989  188079 cri.go:89] found id: "327d0fb7dbbc50c0b1c66ffd711df33c2e0471a307c78992fe70be174ef461bd"
	I0821 11:06:43.190009  188079 cri.go:89] found id: "9581f7f8ccbc19ef0ec935172c03cd5035fb7a36198d2f218807299959d3a846"
	I0821 11:06:43.190013  188079 cri.go:89] found id: "05325b9d295d4de54fc9eee30cdda179eed8761e957e48fc795f9c62de082588"
	I0821 11:06:43.190016  188079 cri.go:89] found id: "179d2ba79aac88d8d5a28cf3b3f3c792df2868d30b9d3e76e99879a3547221e3"
	I0821 11:06:43.190020  188079 cri.go:89] found id: "0828dc2ced588a98499aeca666b45dadc7a01c60197d187af4790a70e968ac60"
	I0821 11:06:43.190024  188079 cri.go:89] found id: "fbe17bd915d26206d4070a22a06d468687f9e9ac3c757f3682e5252af105dba4"
	I0821 11:06:43.190027  188079 cri.go:89] found id: "31d22f771e57016d722ad1604c45f8d36af9b4ef37787395b579eff24cc280d8"
	I0821 11:06:43.190030  188079 cri.go:89] found id: "9e6691b39ff4244c52bee2ef0e9a98f32e3aee856e59421957e8d0ce5ad633a5"
	I0821 11:06:43.190033  188079 cri.go:89] found id: "1c3b689b1b2a0d13c68971a905d11ff217a25648fc16cf416888c5d78e386f07"
	I0821 11:06:43.190042  188079 cri.go:89] found id: "b3ff645935a752a5ba5a21c67f36efee2fb2475db58e9960b1c77825a081eedf"
	I0821 11:06:43.190051  188079 cri.go:89] found id: "ef09b7eb4707d223ba1f602fb7fab739839118125d9ce921bed2bce1b9aa3b70"
	I0821 11:06:43.190060  188079 cri.go:89] found id: "a27c4cf913b18c91abd4ee0f50f86f125c5483a7810d874b75009a359af14bad"
	I0821 11:06:43.190067  188079 cri.go:89] found id: "c9a7dc8a1abbf51ef6bfa173fd7f0c4d33f7de591121a3e3dff201228180c88e"
	I0821 11:06:43.190075  188079 cri.go:89] found id: "412a617b6f397703919a861a02f00109ef6540e2802dc4f1999d5ec37f755813"
	I0821 11:06:43.190086  188079 cri.go:89] found id: "d6e073418281aeac0de2fe83f3ec9c4628e6b21045b247d27d79e3932634a8c4"
	I0821 11:06:43.190092  188079 cri.go:89] found id: "e54946981e64d13a71635a59d35d5b82c350dcd4844893d298edf4d888a69179"
	I0821 11:06:43.190096  188079 cri.go:89] found id: ""
	I0821 11:06:43.190141  188079 ssh_runner.go:195] Run: sudo runc list -f json

** /stderr **
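The container IDs collected above come from filtering by the io.kubernetes.pod.namespace label, with the runc listing (where the captured log cuts off) as the low-level cross-check. The same two commands, exactly as the log runs them inside the node:

    # kube-system container IDs as seen by the CRI
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    # the OCI runtime's own view of the same containers
    sudo runc list -f json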
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-942142
helpers_test.go:235: (dbg) docker inspect pause-942142:

-- stdout --
	[
	    {
	        "Id": "309473feabb5ddf8e1b31a85fe1fa26965aae266f6a8b3dfd5daa9ce19790bd4",
	        "Created": "2023-08-21T11:05:44.569615649Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 179454,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-08-21T11:05:44.915550528Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6cc01e6091959400f260dc442708e7c71630b58dab1f7c344cb00926bd84950",
	        "ResolvConfPath": "/var/lib/docker/containers/309473feabb5ddf8e1b31a85fe1fa26965aae266f6a8b3dfd5daa9ce19790bd4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/309473feabb5ddf8e1b31a85fe1fa26965aae266f6a8b3dfd5daa9ce19790bd4/hostname",
	        "HostsPath": "/var/lib/docker/containers/309473feabb5ddf8e1b31a85fe1fa26965aae266f6a8b3dfd5daa9ce19790bd4/hosts",
	        "LogPath": "/var/lib/docker/containers/309473feabb5ddf8e1b31a85fe1fa26965aae266f6a8b3dfd5daa9ce19790bd4/309473feabb5ddf8e1b31a85fe1fa26965aae266f6a8b3dfd5daa9ce19790bd4-json.log",
	        "Name": "/pause-942142",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-942142:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-942142",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9a3cc4d3a8405a830bbcec04e407d53e79e7d407ce16e9d5fd07cfd156623f5b-init/diff:/var/lib/docker/overlay2/524bb0f129210e266d288d085768bab72d4735717d72ebbb4611a7bc558cb4ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9a3cc4d3a8405a830bbcec04e407d53e79e7d407ce16e9d5fd07cfd156623f5b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9a3cc4d3a8405a830bbcec04e407d53e79e7d407ce16e9d5fd07cfd156623f5b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9a3cc4d3a8405a830bbcec04e407d53e79e7d407ce16e9d5fd07cfd156623f5b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-942142",
	                "Source": "/var/lib/docker/volumes/pause-942142/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-942142",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-942142",
	                "name.minikube.sigs.k8s.io": "pause-942142",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f05339805d36552a8e8c1b37f26b9a02c31c5546b3dbecc6d89c547ec78d1516",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32968"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32967"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32964"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32966"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32965"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f05339805d36",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-942142": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "309473feabb5",
	                        "pause-942142"
	                    ],
	                    "NetworkID": "eb70cca5b08cbac86675674e217e1a9ed987f2b066a7cdfffc38ae2efe8409e3",
	                    "EndpointID": "77560130dd4bc75ee71659f96e31d8e0587e361c4efe1e988602596637e1ceda",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
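The Ports map in the inspect output above is where minikube finds its host-side endpoints; 22/tcp maps to 127.0.0.1:32968 in this run. The same lookup with docker's Go-template syntax (the format string appears verbatim later in this log):

    # host port mapped to the node's SSH, per the inspect output above
    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' pause-942142
    # -> 32968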
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-942142 -n pause-942142
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-942142 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-942142 logs -n 25: (1.60235334s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-872088 sudo                 | cilium-872088             | jenkins | v1.31.2 | 21 Aug 23 11:05 UTC |                     |
	|         | containerd config dump                |                           |         |         |                     |                     |
	| ssh     | -p cilium-872088 sudo                 | cilium-872088             | jenkins | v1.31.2 | 21 Aug 23 11:05 UTC |                     |
	|         | systemctl status crio --all           |                           |         |         |                     |                     |
	|         | --full --no-pager                     |                           |         |         |                     |                     |
	| ssh     | -p cilium-872088 sudo                 | cilium-872088             | jenkins | v1.31.2 | 21 Aug 23 11:05 UTC |                     |
	|         | systemctl cat crio --no-pager         |                           |         |         |                     |                     |
	| ssh     | -p cilium-872088 sudo find            | cilium-872088             | jenkins | v1.31.2 | 21 Aug 23 11:05 UTC |                     |
	|         | /etc/crio -type f -exec sh -c         |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                  |                           |         |         |                     |                     |
	| ssh     | -p cilium-872088 sudo crio            | cilium-872088             | jenkins | v1.31.2 | 21 Aug 23 11:05 UTC |                     |
	|         | config                                |                           |         |         |                     |                     |
	| delete  | -p cilium-872088                      | cilium-872088             | jenkins | v1.31.2 | 21 Aug 23 11:05 UTC | 21 Aug 23 11:05 UTC |
	| start   | -p force-systemd-env-121880           | force-systemd-env-121880  | jenkins | v1.31.2 | 21 Aug 23 11:05 UTC | 21 Aug 23 11:05 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p force-systemd-flag-577578          | force-systemd-flag-577578 | jenkins | v1.31.2 | 21 Aug 23 11:05 UTC | 21 Aug 23 11:05 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p running-upgrade-619999             | running-upgrade-619999    | jenkins | v1.31.2 | 21 Aug 23 11:05 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-619999             | running-upgrade-619999    | jenkins | v1.31.2 | 21 Aug 23 11:05 UTC | 21 Aug 23 11:05 UTC |
	| start   | -p cert-expiration-650157             | cert-expiration-650157    | jenkins | v1.31.2 | 21 Aug 23 11:05 UTC | 21 Aug 23 11:05 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=docker                       |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p missing-upgrade-586789             | missing-upgrade-586789    | jenkins | v1.31.2 | 21 Aug 23 11:05 UTC | 21 Aug 23 11:06 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-121880           | force-systemd-env-121880  | jenkins | v1.31.2 | 21 Aug 23 11:05 UTC | 21 Aug 23 11:05 UTC |
	| start   | -p cert-options-400386                | cert-options-400386       | jenkins | v1.31.2 | 21 Aug 23 11:05 UTC | 21 Aug 23 11:05 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=docker                       |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-577578 ssh cat     | force-systemd-flag-577578 | jenkins | v1.31.2 | 21 Aug 23 11:05 UTC | 21 Aug 23 11:05 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-577578          | force-systemd-flag-577578 | jenkins | v1.31.2 | 21 Aug 23 11:05 UTC | 21 Aug 23 11:05 UTC |
	| start   | -p pause-942142 --memory=2048         | pause-942142              | jenkins | v1.31.2 | 21 Aug 23 11:05 UTC | 21 Aug 23 11:06 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=docker            |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-400386 ssh               | cert-options-400386       | jenkins | v1.31.2 | 21 Aug 23 11:05 UTC | 21 Aug 23 11:05 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-400386 -- sudo        | cert-options-400386       | jenkins | v1.31.2 | 21 Aug 23 11:05 UTC | 21 Aug 23 11:05 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-400386                | cert-options-400386       | jenkins | v1.31.2 | 21 Aug 23 11:05 UTC | 21 Aug 23 11:06 UTC |
	| start   | -p kubernetes-upgrade-433377          | kubernetes-upgrade-433377 | jenkins | v1.31.2 | 21 Aug 23 11:06 UTC | 21 Aug 23 11:06 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p missing-upgrade-586789             | missing-upgrade-586789    | jenkins | v1.31.2 | 21 Aug 23 11:06 UTC | 21 Aug 23 11:06 UTC |
	| start   | -p pause-942142                       | pause-942142              | jenkins | v1.31.2 | 21 Aug 23 11:06 UTC | 21 Aug 23 11:07 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-433377          | kubernetes-upgrade-433377 | jenkins | v1.31.2 | 21 Aug 23 11:06 UTC | 21 Aug 23 11:06 UTC |
	| start   | -p kubernetes-upgrade-433377          | kubernetes-upgrade-433377 | jenkins | v1.31.2 | 21 Aug 23 11:06 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.0-rc.1     |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
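The two pause-942142 rows in the table are the sequence under test here: an initial start without addons, then a second start against the existing profile. Replaying them with the test binary and the flags copied from the audit log:

    # first start, as recorded above
    out/minikube-linux-amd64 start -p pause-942142 --memory=2048 \
      --install-addons=false --wait=all --driver=docker --container-runtime=crio
    # second start, the step this post-mortem covers
    out/minikube-linux-amd64 start -p pause-942142 --alsologtostderr -v=1 \
      --driver=docker --container-runtime=crio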
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/21 11:06:56
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0821 11:06:56.288788  194386 out.go:296] Setting OutFile to fd 1 ...
	I0821 11:06:56.288901  194386 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 11:06:56.288911  194386 out.go:309] Setting ErrFile to fd 2...
	I0821 11:06:56.288915  194386 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 11:06:56.289128  194386 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17102-5717/.minikube/bin
	I0821 11:06:56.289718  194386 out.go:303] Setting JSON to false
	I0821 11:06:56.291323  194386 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2966,"bootTime":1692613050,"procs":694,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0821 11:06:56.291417  194386 start.go:138] virtualization: kvm guest
	I0821 11:06:56.294147  194386 out.go:177] * [kubernetes-upgrade-433377] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0821 11:06:56.295775  194386 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 11:06:56.297215  194386 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 11:06:56.295847  194386 notify.go:220] Checking for updates...
	I0821 11:06:56.300944  194386 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17102-5717/kubeconfig
	I0821 11:06:56.302425  194386 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17102-5717/.minikube
	I0821 11:06:56.303793  194386 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0821 11:06:56.305070  194386 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 11:06:56.306654  194386 config.go:182] Loaded profile config "kubernetes-upgrade-433377": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0821 11:06:56.307080  194386 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 11:06:56.329843  194386 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0821 11:06:56.329937  194386 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0821 11:06:56.387128  194386 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:79 OomKillDisable:true NGoroutines:82 SystemTime:2023-08-21 11:06:56.378036546 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0821 11:06:56.387244  194386 docker.go:294] overlay module found
	I0821 11:06:56.388917  194386 out.go:177] * Using the docker driver based on existing profile
	I0821 11:06:56.390162  194386 start.go:298] selected driver: docker
	I0821 11:06:56.390175  194386 start.go:902] validating driver "docker" against &{Name:kubernetes-upgrade-433377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-433377 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 11:06:56.390260  194386 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 11:06:56.391083  194386 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0821 11:06:56.445492  194386 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:79 OomKillDisable:true NGoroutines:82 SystemTime:2023-08-21 11:06:56.436315238 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0821 11:06:56.445831  194386 cni.go:84] Creating CNI manager for ""
	I0821 11:06:56.445850  194386 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0821 11:06:56.445861  194386 start_flags.go:319] config:
	{Name:kubernetes-upgrade-433377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0-rc.1 ClusterName:kubernetes-upgrade-433377 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 11:06:56.447679  194386 out.go:177] * Starting control plane node kubernetes-upgrade-433377 in cluster kubernetes-upgrade-433377
	I0821 11:06:56.448916  194386 cache.go:122] Beginning downloading kic base image for docker with crio
	I0821 11:06:56.450171  194386 out.go:177] * Pulling base image ...
	I0821 11:06:56.451519  194386 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.1 and runtime crio
	I0821 11:06:56.451583  194386 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17102-5717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I0821 11:06:56.451610  194386 cache.go:57] Caching tarball of preloaded images
	I0821 11:06:56.451625  194386 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0821 11:06:56.451701  194386 preload.go:174] Found /home/jenkins/minikube-integration/17102-5717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-rc.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0821 11:06:56.451716  194386 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.0-rc.1 on crio
	I0821 11:06:56.451858  194386 profile.go:148] Saving config to /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/kubernetes-upgrade-433377/config.json ...
	I0821 11:06:56.469201  194386 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0821 11:06:56.469231  194386 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0821 11:06:56.469255  194386 cache.go:195] Successfully downloaded all kic artifacts
	I0821 11:06:56.469314  194386 start.go:365] acquiring machines lock for kubernetes-upgrade-433377: {Name:mk2296809b3f2eb8da8eba0f1ea9549353ccf3bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 11:06:56.469402  194386 start.go:369] acquired machines lock for "kubernetes-upgrade-433377" in 53.68µs
	I0821 11:06:56.469428  194386 start.go:96] Skipping create...Using existing machine configuration
	I0821 11:06:56.469442  194386 fix.go:54] fixHost starting: 
	I0821 11:06:56.469766  194386 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-433377 --format={{.State.Status}}
	I0821 11:06:56.486134  194386 fix.go:102] recreateIfNeeded on kubernetes-upgrade-433377: state=Stopped err=<nil>
	W0821 11:06:56.486167  194386 fix.go:128] unexpected machine state, will restart: <nil>
	I0821 11:06:56.488038  194386 out.go:177] * Restarting existing docker container for "kubernetes-upgrade-433377" ...
	I0821 11:06:54.659328  188079 pod_ready.go:102] pod "coredns-5d78c9869d-cw6gk" in "kube-system" namespace has status "Ready":"False"
	I0821 11:06:57.070002  188079 pod_ready.go:102] pod "coredns-5d78c9869d-cw6gk" in "kube-system" namespace has status "Ready":"False"
	I0821 11:06:56.489473  194386 cli_runner.go:164] Run: docker start kubernetes-upgrade-433377
	I0821 11:06:56.776478  194386 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-433377 --format={{.State.Status}}
	I0821 11:06:56.796212  194386 kic.go:426] container "kubernetes-upgrade-433377" state is running.
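
The two cli_runner calls above shell out to docker container inspect with a Go template to read the container state back after docker start. A minimal, self-contained Go sketch of that probe (container name taken from the log; error handling trimmed):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerState mirrors: docker container inspect <name> --format={{.State.Status}}
    func containerState(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect",
            "--format", "{{.State.Status}}", name).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        state, err := containerState("kubernetes-upgrade-433377")
        if err != nil {
            fmt.Println("inspect failed:", err)
            return
        }
        fmt.Println("state:", state) // "running" once the restart succeeds
    }
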
	I0821 11:06:56.796697  194386 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-433377
	I0821 11:06:56.815752  194386 profile.go:148] Saving config to /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/kubernetes-upgrade-433377/config.json ...
	I0821 11:06:56.816031  194386 machine.go:88] provisioning docker machine ...
	I0821 11:06:56.816061  194386 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-433377"
	I0821 11:06:56.816125  194386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-433377
	I0821 11:06:56.833482  194386 main.go:141] libmachine: Using SSH client type: native
	I0821 11:06:56.834091  194386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 127.0.0.1 32981 <nil> <nil>}
	I0821 11:06:56.834110  194386 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-433377 && echo "kubernetes-upgrade-433377" | sudo tee /etc/hostname
	I0821 11:06:56.834755  194386 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52546->127.0.0.1:32981: read: connection reset by peer
	I0821 11:06:59.978285  194386 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-433377
	
	I0821 11:06:59.978377  194386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-433377
	I0821 11:06:59.996830  194386 main.go:141] libmachine: Using SSH client type: native
	I0821 11:06:59.997237  194386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 127.0.0.1 32981 <nil> <nil>}
	I0821 11:06:59.997263  194386 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-433377' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-433377/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-433377' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0821 11:07:00.127678  194386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
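
Everything in this provisioning phase is driven over SSH to the forwarded port 32981 shown above. As a rough illustration (not minikube's actual ssh_runner), the same hostname command can be sent with golang.org/x/crypto/ssh, reusing the user, port, and key path reported in the log:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // User, port, and key path come from the sshutil/cli_runner lines above.
        key, err := os.ReadFile("/home/jenkins/minikube-integration/17102-5717/.minikube/machines/kubernetes-upgrade-433377/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test node only
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:32981", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()

        // The same command the provisioner runs to set the hostname.
        out, err := sess.CombinedOutput(`sudo hostname kubernetes-upgrade-433377 && echo "kubernetes-upgrade-433377" | sudo tee /etc/hostname`)
        fmt.Printf("out=%s err=%v\n", out, err)
    }
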
	I0821 11:07:00.127709  194386 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17102-5717/.minikube CaCertPath:/home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17102-5717/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17102-5717/.minikube}
	I0821 11:07:00.127732  194386 ubuntu.go:177] setting up certificates
	I0821 11:07:00.127752  194386 provision.go:83] configureAuth start
	I0821 11:07:00.127805  194386 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-433377
	I0821 11:07:00.145606  194386 provision.go:138] copyHostCerts
	I0821 11:07:00.145675  194386 exec_runner.go:144] found /home/jenkins/minikube-integration/17102-5717/.minikube/key.pem, removing ...
	I0821 11:07:00.145695  194386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17102-5717/.minikube/key.pem
	I0821 11:07:00.145769  194386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17102-5717/.minikube/key.pem (1675 bytes)
	I0821 11:07:00.145889  194386 exec_runner.go:144] found /home/jenkins/minikube-integration/17102-5717/.minikube/ca.pem, removing ...
	I0821 11:07:00.145901  194386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17102-5717/.minikube/ca.pem
	I0821 11:07:00.145937  194386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17102-5717/.minikube/ca.pem (1078 bytes)
	I0821 11:07:00.146024  194386 exec_runner.go:144] found /home/jenkins/minikube-integration/17102-5717/.minikube/cert.pem, removing ...
	I0821 11:07:00.146034  194386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17102-5717/.minikube/cert.pem
	I0821 11:07:00.146073  194386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17102-5717/.minikube/cert.pem (1123 bytes)
	I0821 11:07:00.146159  194386 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17102-5717/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-433377 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-433377]
	I0821 11:07:00.356462  194386 provision.go:172] copyRemoteCerts
	I0821 11:07:00.356545  194386 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0821 11:07:00.356592  194386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-433377
	I0821 11:07:00.376709  194386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32981 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/kubernetes-upgrade-433377/id_rsa Username:docker}
	I0821 11:07:00.472312  194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0821 11:07:00.495636  194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0821 11:07:00.517657  194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0821 11:07:00.539891  194386 provision.go:86] duration metric: configureAuth took 412.125455ms
	I0821 11:07:00.539913  194386 ubuntu.go:193] setting minikube options for container-runtime
	I0821 11:07:00.540075  194386 config.go:182] Loaded profile config "kubernetes-upgrade-433377": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0-rc.1
	I0821 11:07:00.540163  194386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-433377
	I0821 11:07:00.559595  194386 main.go:141] libmachine: Using SSH client type: native
	I0821 11:07:00.560205  194386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 127.0.0.1 32981 <nil> <nil>}
	I0821 11:07:00.560233  194386 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0821 11:07:00.837484  194386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0821 11:07:00.837528  194386 machine.go:91] provisioned docker machine in 4.021480148s
	I0821 11:07:00.837541  194386 start.go:300] post-start starting for "kubernetes-upgrade-433377" (driver="docker")
	I0821 11:07:00.837557  194386 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0821 11:07:00.837638  194386 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0821 11:07:00.837677  194386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-433377
	I0821 11:07:00.856482  194386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32981 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/kubernetes-upgrade-433377/id_rsa Username:docker}
	I0821 11:07:00.948024  194386 ssh_runner.go:195] Run: cat /etc/os-release
	I0821 11:07:00.951490  194386 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0821 11:07:00.951532  194386 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0821 11:07:00.951546  194386 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0821 11:07:00.951554  194386 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0821 11:07:00.951565  194386 filesync.go:126] Scanning /home/jenkins/minikube-integration/17102-5717/.minikube/addons for local assets ...
	I0821 11:07:00.951625  194386 filesync.go:126] Scanning /home/jenkins/minikube-integration/17102-5717/.minikube/files for local assets ...
	I0821 11:07:00.951729  194386 filesync.go:149] local asset: /home/jenkins/minikube-integration/17102-5717/.minikube/files/etc/ssl/certs/124602.pem -> 124602.pem in /etc/ssl/certs
	I0821 11:07:00.951850  194386 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0821 11:07:00.961432  194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/files/etc/ssl/certs/124602.pem --> /etc/ssl/certs/124602.pem (1708 bytes)
	I0821 11:07:00.985402  194386 start.go:303] post-start completed in 147.84441ms
	I0821 11:07:00.985499  194386 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0821 11:07:00.985544  194386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-433377
	I0821 11:07:01.002824  194386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32981 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/kubernetes-upgrade-433377/id_rsa Username:docker}
	I0821 11:07:01.092015  194386 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0821 11:07:01.096274  194386 fix.go:56] fixHost completed within 4.626829787s
	I0821 11:07:01.096295  194386 start.go:83] releasing machines lock for "kubernetes-upgrade-433377", held for 4.626879169s
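
The two df probes just above (df -h /var and df -BG /var) read disk usage on the node over SSH. For reference, the same numbers can be obtained in-process with statfs(2); this is a sketch of the equivalent check, not what minikube itself does:

    package main

    import (
        "fmt"

        "golang.org/x/sys/unix"
    )

    func main() {
        // statfs(2) on /var yields the totals that df formats for the log.
        var st unix.Statfs_t
        if err := unix.Statfs("/var", &st); err != nil {
            panic(err)
        }
        total := st.Blocks * uint64(st.Bsize)
        avail := st.Bavail * uint64(st.Bsize)
        usedPct := 100 - (100*avail)/total
        fmt.Printf("/var: %dG free, ~%d%% used\n", avail>>30, usedPct)
    }
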
	I0821 11:07:01.096358  194386 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-433377
	I0821 11:07:01.113228  194386 ssh_runner.go:195] Run: cat /version.json
	I0821 11:07:01.113269  194386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-433377
	I0821 11:07:01.113338  194386 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0821 11:07:01.113428  194386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-433377
	I0821 11:07:01.131329  194386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32981 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/kubernetes-upgrade-433377/id_rsa Username:docker}
	I0821 11:07:01.131582  194386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32981 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/kubernetes-upgrade-433377/id_rsa Username:docker}
	I0821 11:07:01.320695  194386 ssh_runner.go:195] Run: systemctl --version
	I0821 11:07:01.324779  194386 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0821 11:07:01.470135  194386 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0821 11:07:01.474465  194386 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0821 11:07:01.483424  194386 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0821 11:07:01.483506  194386 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0821 11:07:01.491401  194386 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0821 11:07:01.491426  194386 start.go:466] detecting cgroup driver to use...
	I0821 11:07:01.491460  194386 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0821 11:07:01.491508  194386 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0821 11:07:01.502121  194386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0821 11:07:01.512173  194386 docker.go:196] disabling cri-docker service (if available) ...
	I0821 11:07:01.512235  194386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0821 11:07:01.523669  194386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0821 11:07:01.535046  194386 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0821 11:07:01.630824  194386 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0821 11:07:01.708403  194386 docker.go:212] disabling docker service ...
	I0821 11:07:01.708464  194386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0821 11:07:01.719840  194386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0821 11:07:01.730756  194386 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0821 11:07:01.821821  194386 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0821 11:07:01.907873  194386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0821 11:07:01.918134  194386 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0821 11:07:01.933609  194386 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0821 11:07:01.933661  194386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0821 11:07:01.943122  194386 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0821 11:07:01.943171  194386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0821 11:07:01.952554  194386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0821 11:07:01.961284  194386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
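
The sed runs above rewrite the CRI-O drop-in: force the cgroupfs cgroup manager, drop any stale conmon_cgroup line, then append conmon_cgroup = "pod" after the cgroup_manager line. A minimal Go sketch performing the same edits (assumes it runs as root on the node; the path is the one from the log):

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        conf := string(data)
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*$\n?`).
            ReplaceAllString(conf, "") // like sed '/conmon_cgroup = .*/d'
        conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
            ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"") // like sed '/.../a ...'
        if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
            panic(err)
        }
        fmt.Println("updated", path)
    }
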
	I0821 11:07:01.970518  194386 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0821 11:07:01.979248  194386 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0821 11:07:01.987410  194386 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0821 11:07:01.995453  194386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0821 11:07:02.073697  194386 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0821 11:07:02.847220  194386 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0821 11:07:02.847288  194386 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0821 11:07:02.850675  194386 start.go:534] Will wait 60s for crictl version
	I0821 11:07:02.850729  194386 ssh_runner.go:195] Run: which crictl
	I0821 11:07:02.853978  194386 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0821 11:07:02.889730  194386 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
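
The crictl probe above is how startup verifies it can reach the runtime over the CRI socket before proceeding. A small sketch of the same check (assumes crictl is on PATH and crio is already running):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Locate crictl, then ask the runtime for its version, as in the log:
        // "Run: which crictl" followed by "Run: sudo /usr/bin/crictl version".
        path, err := exec.LookPath("crictl")
        if err != nil {
            panic(err)
        }
        out, err := exec.Command("sudo", path, "version").CombinedOutput()
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out)) // RuntimeName: cri-o, RuntimeVersion: 1.24.6, ...
    }
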
	I0821 11:07:02.889808  194386 ssh_runner.go:195] Run: crio --version
	I0821 11:07:02.924858  194386 ssh_runner.go:195] Run: crio --version
	I0821 11:07:02.964727  194386 out.go:177] * Preparing Kubernetes v1.28.0-rc.1 on CRI-O 1.24.6 ...
	I0821 11:06:59.569523  188079 pod_ready.go:102] pod "coredns-5d78c9869d-cw6gk" in "kube-system" namespace has status "Ready":"False"
	I0821 11:07:02.070307  188079 pod_ready.go:102] pod "coredns-5d78c9869d-cw6gk" in "kube-system" namespace has status "Ready":"False"
	I0821 11:07:04.069889  188079 pod_ready.go:92] pod "coredns-5d78c9869d-cw6gk" in "kube-system" namespace has status "Ready":"True"
	I0821 11:07:04.069911  188079 pod_ready.go:81] duration metric: took 13.516869313s waiting for pod "coredns-5d78c9869d-cw6gk" in "kube-system" namespace to be "Ready" ...
	I0821 11:07:04.069932  188079 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-942142" in "kube-system" namespace to be "Ready" ...
	I0821 11:07:04.074615  188079 pod_ready.go:92] pod "etcd-pause-942142" in "kube-system" namespace has status "Ready":"True"
	I0821 11:07:04.074633  188079 pod_ready.go:81] duration metric: took 4.695377ms waiting for pod "etcd-pause-942142" in "kube-system" namespace to be "Ready" ...
	I0821 11:07:04.074646  188079 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-942142" in "kube-system" namespace to be "Ready" ...
	I0821 11:07:04.080100  188079 pod_ready.go:92] pod "kube-apiserver-pause-942142" in "kube-system" namespace has status "Ready":"True"
	I0821 11:07:04.080128  188079 pod_ready.go:81] duration metric: took 5.474752ms waiting for pod "kube-apiserver-pause-942142" in "kube-system" namespace to be "Ready" ...
	I0821 11:07:04.080143  188079 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-942142" in "kube-system" namespace to be "Ready" ...
	I0821 11:07:04.085038  188079 pod_ready.go:92] pod "kube-controller-manager-pause-942142" in "kube-system" namespace has status "Ready":"True"
	I0821 11:07:04.085063  188079 pod_ready.go:81] duration metric: took 4.911727ms waiting for pod "kube-controller-manager-pause-942142" in "kube-system" namespace to be "Ready" ...
	I0821 11:07:04.085076  188079 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vbspt" in "kube-system" namespace to be "Ready" ...
	I0821 11:07:04.090042  188079 pod_ready.go:92] pod "kube-proxy-vbspt" in "kube-system" namespace has status "Ready":"True"
	I0821 11:07:04.090063  188079 pod_ready.go:81] duration metric: took 4.980637ms waiting for pod "kube-proxy-vbspt" in "kube-system" namespace to be "Ready" ...
	I0821 11:07:04.090074  188079 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-942142" in "kube-system" namespace to be "Ready" ...
	I0821 11:07:04.467663  188079 pod_ready.go:92] pod "kube-scheduler-pause-942142" in "kube-system" namespace has status "Ready":"True"
	I0821 11:07:04.467699  188079 pod_ready.go:81] duration metric: took 377.614764ms waiting for pod "kube-scheduler-pause-942142" in "kube-system" namespace to be "Ready" ...
	I0821 11:07:04.467711  188079 pod_ready.go:38] duration metric: took 15.938249122s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
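
The pod_ready.go loop above polls each control-plane pod until its PodReady condition reports True. A rough client-go equivalent (pod name from the log; the kubeconfig path and poll interval are illustrative choices, not what the test harness uses):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod carries a Ready=True condition,
    // the same signal the waits above poll for.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll one kube-system pod until Ready, mirroring the 6m0s waits above.
        err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-pause-942142", metav1.GetOptions{})
            if err != nil {
                return false, nil // treat errors as transient and keep polling
            }
            return isPodReady(pod), nil
        })
        fmt.Println("ready wait result:", err)
    }
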
	I0821 11:07:04.467731  188079 api_server.go:52] waiting for apiserver process to appear ...
	I0821 11:07:04.467797  188079 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0821 11:07:04.478621  188079 api_server.go:72] duration metric: took 16.013450754s to wait for apiserver process to appear ...
	I0821 11:07:04.478649  188079 api_server.go:88] waiting for apiserver healthz status ...
	I0821 11:07:04.478671  188079 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0821 11:07:04.483399  188079 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0821 11:07:04.484477  188079 api_server.go:141] control plane version: v1.27.4
	I0821 11:07:04.484497  188079 api_server.go:131] duration metric: took 5.84138ms to wait for apiserver health ...
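
The healthz check above is a plain HTTPS GET against the apiserver that expects a 200 status and the literal body "ok". A self-contained sketch (the endpoint is the one from the log; TLS verification is skipped here only because the test cluster's CA is self-signed, and a real client should load that CA instead):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.76.2:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("status=%d body=%q\n", resp.StatusCode, body) // expect 200 "ok"
    }
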
	I0821 11:07:04.484505  188079 system_pods.go:43] waiting for kube-system pods to appear ...
	I0821 11:07:04.669603  188079 system_pods.go:59] 8 kube-system pods found
	I0821 11:07:04.669633  188079 system_pods.go:61] "coredns-5d78c9869d-2fzvb" [d4ffa418-e75b-4b1b-a386-f376ea79a072] Running
	I0821 11:07:04.669637  188079 system_pods.go:61] "coredns-5d78c9869d-cw6gk" [f27ce3bb-203e-4d6b-988a-bdb76928ec2f] Running
	I0821 11:07:04.669642  188079 system_pods.go:61] "etcd-pause-942142" [de553f76-9b52-4880-8853-f7fbc1e46d1c] Running
	I0821 11:07:04.669647  188079 system_pods.go:61] "kindnet-qrlk5" [f0c3caaf-c929-49de-97b5-f2ad04d37a2c] Running
	I0821 11:07:04.669651  188079 system_pods.go:61] "kube-apiserver-pause-942142" [5f01265b-71bb-4303-b0e2-d395043684a8] Running
	I0821 11:07:04.669656  188079 system_pods.go:61] "kube-controller-manager-pause-942142" [9d526837-7c56-4997-98b1-41d391a8dbfe] Running
	I0821 11:07:04.669660  188079 system_pods.go:61] "kube-proxy-vbspt" [86d66575-f671-4128-a820-81d28df6b57b] Running
	I0821 11:07:04.669666  188079 system_pods.go:61] "kube-scheduler-pause-942142" [2f5fe7c1-dbff-4666-8372-54b630511290] Running
	I0821 11:07:04.669672  188079 system_pods.go:74] duration metric: took 185.163067ms to wait for pod list to return data ...
	I0821 11:07:04.669679  188079 default_sa.go:34] waiting for default service account to be created ...
	I0821 11:07:04.867263  188079 default_sa.go:45] found service account: "default"
	I0821 11:07:04.867289  188079 default_sa.go:55] duration metric: took 197.602175ms for default service account to be created ...
	I0821 11:07:04.867300  188079 system_pods.go:116] waiting for k8s-apps to be running ...
	I0821 11:07:05.072522  188079 system_pods.go:86] 8 kube-system pods found
	I0821 11:07:05.072561  188079 system_pods.go:89] "coredns-5d78c9869d-2fzvb" [d4ffa418-e75b-4b1b-a386-f376ea79a072] Running
	I0821 11:07:05.072570  188079 system_pods.go:89] "coredns-5d78c9869d-cw6gk" [f27ce3bb-203e-4d6b-988a-bdb76928ec2f] Running
	I0821 11:07:05.072577  188079 system_pods.go:89] "etcd-pause-942142" [de553f76-9b52-4880-8853-f7fbc1e46d1c] Running
	I0821 11:07:05.072585  188079 system_pods.go:89] "kindnet-qrlk5" [f0c3caaf-c929-49de-97b5-f2ad04d37a2c] Running
	I0821 11:07:05.072593  188079 system_pods.go:89] "kube-apiserver-pause-942142" [5f01265b-71bb-4303-b0e2-d395043684a8] Running
	I0821 11:07:05.072609  188079 system_pods.go:89] "kube-controller-manager-pause-942142" [9d526837-7c56-4997-98b1-41d391a8dbfe] Running
	I0821 11:07:05.072616  188079 system_pods.go:89] "kube-proxy-vbspt" [86d66575-f671-4128-a820-81d28df6b57b] Running
	I0821 11:07:05.072624  188079 system_pods.go:89] "kube-scheduler-pause-942142" [2f5fe7c1-dbff-4666-8372-54b630511290] Running
	I0821 11:07:05.072632  188079 system_pods.go:126] duration metric: took 205.326598ms to wait for k8s-apps to be running ...
	I0821 11:07:05.072641  188079 system_svc.go:44] waiting for kubelet service to be running ....
	I0821 11:07:05.072691  188079 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0821 11:07:05.091803  188079 system_svc.go:56] duration metric: took 19.150807ms WaitForService to wait for kubelet.
	I0821 11:07:05.091834  188079 kubeadm.go:581] duration metric: took 16.626667922s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0821 11:07:05.091856  188079 node_conditions.go:102] verifying NodePressure condition ...
	I0821 11:07:05.267303  188079 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0821 11:07:05.267326  188079 node_conditions.go:123] node cpu capacity is 8
	I0821 11:07:05.267336  188079 node_conditions.go:105] duration metric: took 175.47601ms to run NodePressure ...
	I0821 11:07:05.267345  188079 start.go:228] waiting for startup goroutines ...
	I0821 11:07:05.267372  188079 start.go:233] waiting for cluster config update ...
	I0821 11:07:05.267382  188079 start.go:242] writing updated cluster config ...
	I0821 11:07:05.267659  188079 ssh_runner.go:195] Run: rm -f paused
	I0821 11:07:05.353701  188079 start.go:600] kubectl: 1.28.0, cluster: 1.27.4 (minor skew: 1)
	I0821 11:07:05.362183  188079 out.go:177] * Done! kubectl is now configured to use "pause-942142" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Aug 21 11:06:44 pause-942142 crio[3206]: time="2023-08-21 11:06:44.177586845Z" level=info msg="Started container" PID=3715 containerID=6b469ef8743fed7a9528c3aeaf124bb3b503d6bd4d5d0d6cc1d0a5bd34f9cce3 description=kube-system/kube-apiserver-pause-942142/kube-apiserver id=cccc3c23-38b1-4b66-bb4b-19bd3aefc53c name=/runtime.v1.RuntimeService/StartContainer sandboxID=e51e396e2e7d38c752d84a5cc95e441f03b7ee670d32dd7978dd26ff0703428f
	Aug 21 11:06:44 pause-942142 crio[3206]: time="2023-08-21 11:06:44.185382128Z" level=info msg="Started container" PID=3714 containerID=54f4e9a4e3c929269d0b605a99f65c21935aacd534be322a7d20762aada66820 description=kube-system/kube-controller-manager-pause-942142/kube-controller-manager id=0d5a96ad-c065-4c9c-81de-b143afc83058 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0b55e173999efde3100ba7dc963e78fedd2d51bc883a80e7a6a6402334a74d92
	Aug 21 11:06:47 pause-942142 crio[3206]: time="2023-08-21 11:06:47.249249915Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Aug 21 11:06:47 pause-942142 crio[3206]: time="2023-08-21 11:06:47.255244744Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 21 11:06:47 pause-942142 crio[3206]: time="2023-08-21 11:06:47.255287363Z" level=info msg="Updated default CNI network name to kindnet"
	Aug 21 11:06:47 pause-942142 crio[3206]: time="2023-08-21 11:06:47.255304809Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Aug 21 11:06:47 pause-942142 crio[3206]: time="2023-08-21 11:06:47.263670352Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 21 11:06:47 pause-942142 crio[3206]: time="2023-08-21 11:06:47.263706205Z" level=info msg="Updated default CNI network name to kindnet"
	Aug 21 11:06:47 pause-942142 crio[3206]: time="2023-08-21 11:06:47.263723075Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Aug 21 11:06:47 pause-942142 crio[3206]: time="2023-08-21 11:06:47.339890699Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 21 11:06:47 pause-942142 crio[3206]: time="2023-08-21 11:06:47.339934926Z" level=info msg="Updated default CNI network name to kindnet"
	Aug 21 11:06:47 pause-942142 crio[3206]: time="2023-08-21 11:06:47.339958858Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Aug 21 11:06:47 pause-942142 crio[3206]: time="2023-08-21 11:06:47.344162198Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 21 11:06:47 pause-942142 crio[3206]: time="2023-08-21 11:06:47.344199730Z" level=info msg="Updated default CNI network name to kindnet"
	Aug 21 11:07:02 pause-942142 crio[3206]: time="2023-08-21 11:07:02.454249526Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.10.1" id=9a346d35-0418-4357-808b-b1fcb56928de name=/runtime.v1.ImageService/ImageStatus
	Aug 21 11:07:02 pause-942142 crio[3206]: time="2023-08-21 11:07:02.454500303Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,RepoTags:[registry.k8s.io/coredns/coredns:v1.10.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378],Size_:53621675,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=9a346d35-0418-4357-808b-b1fcb56928de name=/runtime.v1.ImageService/ImageStatus
	Aug 21 11:07:02 pause-942142 crio[3206]: time="2023-08-21 11:07:02.457971765Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.10.1" id=adeb9401-ff20-4477-9029-21f9e0aefab1 name=/runtime.v1.ImageService/ImageStatus
	Aug 21 11:07:02 pause-942142 crio[3206]: time="2023-08-21 11:07:02.458214516Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,RepoTags:[registry.k8s.io/coredns/coredns:v1.10.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378],Size_:53621675,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=adeb9401-ff20-4477-9029-21f9e0aefab1 name=/runtime.v1.ImageService/ImageStatus
	Aug 21 11:07:02 pause-942142 crio[3206]: time="2023-08-21 11:07:02.459250133Z" level=info msg="Creating container: kube-system/coredns-5d78c9869d-cw6gk/coredns" id=f7affeba-4d5b-4b3a-8426-3b33b9b93088 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 21 11:07:02 pause-942142 crio[3206]: time="2023-08-21 11:07:02.459394283Z" level=warning msg="Allowed annotations are specified for workload []"
	Aug 21 11:07:02 pause-942142 crio[3206]: time="2023-08-21 11:07:02.663435417Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/473b96a41bf819b26ec93f1045995b3a135277bf23b2b8702667bee444ab6716/merged/etc/passwd: no such file or directory"
	Aug 21 11:07:02 pause-942142 crio[3206]: time="2023-08-21 11:07:02.663476167Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/473b96a41bf819b26ec93f1045995b3a135277bf23b2b8702667bee444ab6716/merged/etc/group: no such file or directory"
	Aug 21 11:07:02 pause-942142 crio[3206]: time="2023-08-21 11:07:02.797188589Z" level=info msg="Created container f835818a31b3eac4ddd96ea64c695523f75dd3f85fd579f47b4c2cb5ad6cf7bd: kube-system/coredns-5d78c9869d-cw6gk/coredns" id=f7affeba-4d5b-4b3a-8426-3b33b9b93088 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 21 11:07:02 pause-942142 crio[3206]: time="2023-08-21 11:07:02.797778298Z" level=info msg="Starting container: f835818a31b3eac4ddd96ea64c695523f75dd3f85fd579f47b4c2cb5ad6cf7bd" id=d8e2e341-267e-4f79-b263-eb846cb4fad0 name=/runtime.v1.RuntimeService/StartContainer
	Aug 21 11:07:02 pause-942142 crio[3206]: time="2023-08-21 11:07:02.807525152Z" level=info msg="Started container" PID=4096 containerID=f835818a31b3eac4ddd96ea64c695523f75dd3f85fd579f47b4c2cb5ad6cf7bd description=kube-system/coredns-5d78c9869d-cw6gk/coredns id=d8e2e341-267e-4f79-b263-eb846cb4fad0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=16e97ff277a28122017474cb4e6b067ec9cb646189d86462dd8f408a79f9a86b
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f835818a31b3e       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   3 seconds ago       Running             coredns                   2                   16e97ff277a28       coredns-5d78c9869d-cw6gk
	54f4e9a4e3c92       f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5   22 seconds ago      Running             kube-controller-manager   2                   0b55e173999ef       kube-controller-manager-pause-942142
	6b469ef8743fe       e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c   22 seconds ago      Running             kube-apiserver            2                   e51e396e2e7d3       kube-apiserver-pause-942142
	e277e19aebf2c       98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16   22 seconds ago      Running             kube-scheduler            2                   69f36d87548b3       kube-scheduler-pause-942142
	e4f2bb098d137       b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da   22 seconds ago      Running             kindnet-cni               2                   d9fcc0be76b03       kindnet-qrlk5
	6fd177b85deca       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681   22 seconds ago      Running             etcd                      2                   c045432d7748a       etcd-pause-942142
	7b334173daa92       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   22 seconds ago      Running             coredns                   2                   b4afa50bde8f0       coredns-5d78c9869d-2fzvb
	544c01d0a3442       6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4   23 seconds ago      Running             kube-proxy                2                   c2212720a9625       kube-proxy-vbspt
	327d0fb7dbbc5       e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c   34 seconds ago      Exited              kube-apiserver            1                   e51e396e2e7d3       kube-apiserver-pause-942142
	9581f7f8ccbc1       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   34 seconds ago      Exited              coredns                   1                   b4afa50bde8f0       coredns-5d78c9869d-2fzvb
	05325b9d295d4       f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5   34 seconds ago      Exited              kube-controller-manager   1                   0b55e173999ef       kube-controller-manager-pause-942142
	179d2ba79aac8       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   34 seconds ago      Exited              coredns                   1                   16e97ff277a28       coredns-5d78c9869d-cw6gk
	0828dc2ced588       6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4   34 seconds ago      Exited              kube-proxy                1                   c2212720a9625       kube-proxy-vbspt
	fbe17bd915d26       b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da   35 seconds ago      Exited              kindnet-cni               1                   d9fcc0be76b03       kindnet-qrlk5
	31d22f771e570       98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16   35 seconds ago      Exited              kube-scheduler            1                   69f36d87548b3       kube-scheduler-pause-942142
	9e6691b39ff42       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681   35 seconds ago      Exited              etcd                      1                   c045432d7748a       etcd-pause-942142
	
	* 
	* ==> coredns [179d2ba79aac88d8d5a28cf3b3f3c792df2868d30b9d3e76e99879a3547221e3] <==
	* [INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:51750 - 8698 "HINFO IN 2352085751448041643.1116235000315882265. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.056761918s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> coredns [7b334173daa9205e3def69c0c9aa3fde5b5022a9f338e2895d0fcbf8ae76dc7e] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:42211 - 45657 "HINFO IN 4300767555842199613.3515570745801274251. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.045226569s
	
	* 
	* ==> coredns [9581f7f8ccbc19ef0ec935172c03cd5035fb7a36198d2f218807299959d3a846] <==
	* 
	* 
	* ==> coredns [f835818a31b3eac4ddd96ea64c695523f75dd3f85fd579f47b4c2cb5ad6cf7bd] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:54775 - 1612 "HINFO IN 1639401811397350260.1326845661261012929. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.057596052s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-942142
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-942142
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43
	                    minikube.k8s.io/name=pause-942142
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_21T11_06_06_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 21 Aug 2023 11:06:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-942142
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 21 Aug 2023 11:06:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 21 Aug 2023 11:06:20 +0000   Mon, 21 Aug 2023 11:05:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 21 Aug 2023 11:06:20 +0000   Mon, 21 Aug 2023 11:05:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 21 Aug 2023 11:06:20 +0000   Mon, 21 Aug 2023 11:05:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 21 Aug 2023 11:06:20 +0000   Mon, 21 Aug 2023 11:06:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-942142
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	System Info:
	  Machine ID:                 0fc06879fd5d45638274c1305316b091
	  System UUID:                e347383a-6298-4d71-aa9f-53ae7e22a7ae
	  Boot ID:                    19bba9d5-fb53-4c36-8f17-b39d772f0931
	  Kernel Version:             5.15.0-1039-gcp
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.27.4
	  Kube-Proxy Version:         v1.27.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5d78c9869d-2fzvb                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     48s
	  kube-system                 coredns-5d78c9869d-cw6gk                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     48s
	  kube-system                 etcd-pause-942142                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         61s
	  kube-system                 kindnet-qrlk5                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      48s
	  kube-system                 kube-apiserver-pause-942142             250m (3%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-controller-manager-pause-942142    200m (2%)     0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-proxy-vbspt                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 kube-scheduler-pause-942142             100m (1%)     0 (0%)      0 (0%)           0 (0%)         61s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 47s   kube-proxy       
	  Normal  Starting                 19s   kube-proxy       
	  Normal  Starting                 61s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  61s   kubelet          Node pause-942142 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s   kubelet          Node pause-942142 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s   kubelet          Node pause-942142 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           49s   node-controller  Node pause-942142 event: Registered Node pause-942142 in Controller
	  Normal  NodeReady                46s   kubelet          Node pause-942142 status is now: NodeReady
	  Normal  RegisteredNode           8s    node-controller  Node pause-942142 event: Registered Node pause-942142 in Controller
	
	* 
	* ==> dmesg <==
	* [  +4.191597] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ede9cfb77cb9
	[  +0.000006] ll header: 00000000: 02 42 0a 3e a8 14 02 42 c0 a8 3a 02 08 00
	[  +8.191252] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ede9cfb77cb9
	[  +0.000008] ll header: 00000000: 02 42 0a 3e a8 14 02 42 c0 a8 3a 02 08 00
	[Aug21 10:58] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-ede9cfb77cb9
	[  +0.000007] ll header: 00000000: 02 42 0a 3e a8 14 02 42 c0 a8 3a 02 08 00
	[  +1.016767] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-ede9cfb77cb9
	[  +0.000008] ll header: 00000000: 02 42 0a 3e a8 14 02 42 c0 a8 3a 02 08 00
	[  +2.015803] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-ede9cfb77cb9
	[  +0.000006] ll header: 00000000: 02 42 0a 3e a8 14 02 42 c0 a8 3a 02 08 00
	[  +4.031606] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-ede9cfb77cb9
	[  +0.000007] ll header: 00000000: 02 42 0a 3e a8 14 02 42 c0 a8 3a 02 08 00
	[  +8.191228] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-ede9cfb77cb9
	[  +0.000007] ll header: 00000000: 02 42 0a 3e a8 14 02 42 c0 a8 3a 02 08 00
	[Aug21 11:01] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-db3bed39ab33
	[  +0.000006] ll header: 00000000: 02 42 37 03 4b 8d 02 42 c0 a8 43 02 08 00
	[  +1.002962] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-db3bed39ab33
	[  +0.000006] ll header: 00000000: 02 42 37 03 4b 8d 02 42 c0 a8 43 02 08 00
	[  +2.015772] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-db3bed39ab33
	[  +0.000022] ll header: 00000000: 02 42 37 03 4b 8d 02 42 c0 a8 43 02 08 00
	[  +4.063611] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-db3bed39ab33
	[  +0.000006] ll header: 00000000: 02 42 37 03 4b 8d 02 42 c0 a8 43 02 08 00
	[  +8.191184] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-db3bed39ab33
	[  +0.000007] ll header: 00000000: 02 42 37 03 4b 8d 02 42 c0 a8 43 02 08 00
	[Aug21 11:04] process 'docker/tmp/qemu-check476447643/check' started with executable stack
	
	* 
	* ==> etcd [6fd177b85deca00ecd00526f81b113b77eab41cd0c23ca9ae8fc061ba5c61c5d] <==
	* {"level":"info","ts":"2023-08-21T11:06:44.242Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-08-21T11:06:44.248Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-08-21T11:06:45.881Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2023-08-21T11:06:45.881Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-08-21T11:06:45.881Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2023-08-21T11:06:45.881Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2023-08-21T11:06:45.881Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-08-21T11:06:45.881Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2023-08-21T11:06:45.881Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-08-21T11:06:45.882Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-942142 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-08-21T11:06:45.882Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-21T11:06:45.882Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-21T11:06:45.882Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-21T11:06:45.882Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-08-21T11:06:45.883Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-08-21T11:06:45.883Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2023-08-21T11:06:53.725Z","caller":"traceutil/trace.go:171","msg":"trace[1699630513] transaction","detail":"{read_only:false; response_revision:417; number_of_response:1; }","duration":"118.671593ms","start":"2023-08-21T11:06:53.607Z","end":"2023-08-21T11:06:53.725Z","steps":["trace[1699630513] 'process raft request'  (duration: 118.522354ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-21T11:06:53.860Z","caller":"traceutil/trace.go:171","msg":"trace[2036191968] transaction","detail":"{read_only:false; response_revision:418; number_of_response:1; }","duration":"129.636554ms","start":"2023-08-21T11:06:53.730Z","end":"2023-08-21T11:06:53.860Z","steps":["trace[2036191968] 'process raft request'  (duration: 129.499422ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-21T11:06:54.060Z","caller":"traceutil/trace.go:171","msg":"trace[1078095373] transaction","detail":"{read_only:false; response_revision:419; number_of_response:1; }","duration":"195.870037ms","start":"2023-08-21T11:06:53.864Z","end":"2023-08-21T11:06:54.060Z","steps":["trace[1078095373] 'process raft request'  (duration: 119.434991ms)","trace[1078095373] 'compare'  (duration: 76.348735ms)"],"step_count":2}
	{"level":"info","ts":"2023-08-21T11:06:54.236Z","caller":"traceutil/trace.go:171","msg":"trace[615549974] transaction","detail":"{read_only:false; response_revision:421; number_of_response:1; }","duration":"110.442012ms","start":"2023-08-21T11:06:54.125Z","end":"2023-08-21T11:06:54.236Z","steps":["trace[615549974] 'process raft request'  (duration: 56.82652ms)","trace[615549974] 'compare'  (duration: 53.517724ms)"],"step_count":2}
	{"level":"warn","ts":"2023-08-21T11:06:54.515Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"177.918396ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/kube-apiserver-pause-942142.177d614550e7993d\" ","response":"range_response_count:1 size:781"}
	{"level":"info","ts":"2023-08-21T11:06:54.515Z","caller":"traceutil/trace.go:171","msg":"trace[2002545719] range","detail":"{range_begin:/registry/events/kube-system/kube-apiserver-pause-942142.177d614550e7993d; range_end:; response_count:1; response_revision:422; }","duration":"178.0095ms","start":"2023-08-21T11:06:54.337Z","end":"2023-08-21T11:06:54.515Z","steps":["trace[2002545719] 'range keys from in-memory index tree'  (duration: 177.808813ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-21T11:06:54.856Z","caller":"traceutil/trace.go:171","msg":"trace[1201774658] transaction","detail":"{read_only:false; response_revision:427; number_of_response:1; }","duration":"119.751483ms","start":"2023-08-21T11:06:54.736Z","end":"2023-08-21T11:06:54.856Z","steps":["trace[1201774658] 'process raft request'  (duration: 60.521186ms)","trace[1201774658] 'compare'  (duration: 59.118239ms)"],"step_count":2}
	{"level":"info","ts":"2023-08-21T11:06:55.067Z","caller":"traceutil/trace.go:171","msg":"trace[491422818] transaction","detail":"{read_only:false; response_revision:432; number_of_response:1; }","duration":"124.300399ms","start":"2023-08-21T11:06:54.943Z","end":"2023-08-21T11:06:55.067Z","steps":["trace[491422818] 'process raft request'  (duration: 65.843692ms)","trace[491422818] 'compare'  (duration: 58.329852ms)"],"step_count":2}
	{"level":"info","ts":"2023-08-21T11:06:55.525Z","caller":"traceutil/trace.go:171","msg":"trace[738112209] transaction","detail":"{read_only:false; response_revision:439; number_of_response:1; }","duration":"100.408215ms","start":"2023-08-21T11:06:55.425Z","end":"2023-08-21T11:06:55.525Z","steps":["trace[738112209] 'process raft request'  (duration: 100.28954ms)"],"step_count":1}
	
	* 
	* ==> etcd [9e6691b39ff4244c52bee2ef0e9a98f32e3aee856e59421957e8d0ce5ad633a5] <==
	* {"level":"info","ts":"2023-08-21T11:06:32.144Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"804.306µs"}
	{"level":"info","ts":"2023-08-21T11:06:32.147Z","caller":"etcdserver/server.go:530","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2023-08-21T11:06:32.151Z","caller":"etcdserver/raft.go:529","msg":"restarting local member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","commit-index":406}
	{"level":"info","ts":"2023-08-21T11:06:32.151Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=()"}
	{"level":"info","ts":"2023-08-21T11:06:32.152Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became follower at term 2"}
	{"level":"info","ts":"2023-08-21T11:06:32.152Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft ea7e25599daad906 [peers: [], term: 2, commit: 406, applied: 0, lastindex: 406, lastterm: 2]"}
	{"level":"warn","ts":"2023-08-21T11:06:32.156Z","caller":"auth/store.go:1234","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2023-08-21T11:06:32.159Z","caller":"mvcc/kvstore.go:393","msg":"kvstore restored","current-rev":391}
	{"level":"info","ts":"2023-08-21T11:06:32.162Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2023-08-21T11:06:32.169Z","caller":"etcdserver/corrupt.go:95","msg":"starting initial corruption check","local-member-id":"ea7e25599daad906","timeout":"7s"}
	{"level":"info","ts":"2023-08-21T11:06:32.238Z","caller":"etcdserver/corrupt.go:165","msg":"initial corruption checking passed; no corruption","local-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2023-08-21T11:06:32.238Z","caller":"etcdserver/server.go:854","msg":"starting etcd server","local-member-id":"ea7e25599daad906","local-server-version":"3.5.7","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2023-08-21T11:06:32.244Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-08-21T11:06:32.244Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-08-21T11:06:32.244Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-08-21T11:06:32.245Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-08-21T11:06:32.246Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2023-08-21T11:06:32.246Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2023-08-21T11:06:32.246Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-08-21T11:06:32.247Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-08-21T11:06:32.247Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-08-21T11:06:32.246Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-21T11:06:32.247Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-21T11:06:32.246Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-08-21T11:06:32.248Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.76.2:2380"}
	
	* 
	* ==> kernel <==
	*  11:07:06 up 49 min,  0 users,  load average: 4.64, 3.27, 1.77
	Linux pause-942142 5.15.0-1039-gcp #47~20.04.1-Ubuntu SMP Thu Jul 27 22:40:03 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [e4f2bb098d13767e250b277c867cad4b943daf14274a6d06432efeadfcf1a383] <==
	* I0821 11:06:44.040329       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0821 11:06:44.040393       1 main.go:107] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0821 11:06:44.040594       1 main.go:116] setting mtu 1500 for CNI 
	I0821 11:06:44.040617       1 main.go:146] kindnetd IP family: "ipv4"
	I0821 11:06:44.040644       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0821 11:06:44.344031       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0821 11:06:44.344407       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0821 11:06:47.248934       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0821 11:06:47.248962       1 main.go:227] handling current node
	I0821 11:06:57.262464       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0821 11:06:57.262486       1 main.go:227] handling current node
	
	* 
	* ==> kindnet [fbe17bd915d26206d4070a22a06d468687f9e9ac3c757f3682e5252af105dba4] <==
	* I0821 11:06:32.160108       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0821 11:06:32.160994       1 main.go:107] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0821 11:06:32.161305       1 main.go:116] setting mtu 1500 for CNI 
	I0821 11:06:32.161380       1 main.go:146] kindnetd IP family: "ipv4"
	I0821 11:06:32.161463       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	
	* 
	* ==> kube-apiserver [327d0fb7dbbc50c0b1c66ffd711df33c2e0471a307c78992fe70be174ef461bd] <==
	* 
	* 
	* ==> kube-apiserver [6b469ef8743fed7a9528c3aeaf124bb3b503d6bd4d5d0d6cc1d0a5bd34f9cce3] <==
	* I0821 11:06:47.034084       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0821 11:06:47.034147       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0821 11:06:47.034186       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0821 11:06:47.033959       1 controller.go:85] Starting OpenAPI V3 controller
	I0821 11:06:47.034188       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0821 11:06:47.033973       1 naming_controller.go:291] Starting NamingConditionController
	E0821 11:06:47.163601       1 controller.go:155] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0821 11:06:47.165750       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0821 11:06:47.235992       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0821 11:06:47.236140       1 aggregator.go:152] initial CRD sync complete...
	I0821 11:06:47.236175       1 autoregister_controller.go:141] Starting autoregister controller
	I0821 11:06:47.236226       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0821 11:06:47.236258       1 cache.go:39] Caches are synced for autoregister controller
	I0821 11:06:47.236764       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0821 11:06:47.237282       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0821 11:06:47.237309       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0821 11:06:47.238874       1 shared_informer.go:318] Caches are synced for configmaps
	I0821 11:06:47.241445       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0821 11:06:47.242740       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0821 11:06:47.244373       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0821 11:06:47.806268       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0821 11:06:48.034313       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0821 11:06:58.452021       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0821 11:06:58.482655       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0821 11:06:58.497981       1 controller.go:624] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-controller-manager [05325b9d295d4de54fc9eee30cdda179eed8761e957e48fc795f9c62de082588] <==
	* 
	* 
	* ==> kube-controller-manager [54f4e9a4e3c929269d0b605a99f65c21935aacd534be322a7d20762aada66820] <==
	* I0821 11:06:58.448266       1 shared_informer.go:318] Caches are synced for job
	I0821 11:06:58.448789       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0821 11:06:58.452168       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0821 11:06:58.453340       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0821 11:06:58.454504       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0821 11:06:58.456165       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0821 11:06:58.466108       1 shared_informer.go:318] Caches are synced for namespace
	I0821 11:06:58.469424       1 shared_informer.go:318] Caches are synced for deployment
	I0821 11:06:58.471701       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0821 11:06:58.475194       1 shared_informer.go:318] Caches are synced for daemon sets
	I0821 11:06:58.478768       1 shared_informer.go:318] Caches are synced for expand
	I0821 11:06:58.478850       1 shared_informer.go:318] Caches are synced for PV protection
	I0821 11:06:58.480968       1 shared_informer.go:318] Caches are synced for PVC protection
	I0821 11:06:58.487541       1 shared_informer.go:318] Caches are synced for GC
	I0821 11:06:58.487576       1 shared_informer.go:318] Caches are synced for crt configmap
	I0821 11:06:58.487549       1 shared_informer.go:318] Caches are synced for endpoint
	I0821 11:06:58.488004       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5d78c9869d to 1 from 2"
	I0821 11:06:58.489970       1 shared_informer.go:318] Caches are synced for disruption
	I0821 11:06:58.543938       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5d78c9869d-cw6gk"
	I0821 11:06:58.572694       1 shared_informer.go:318] Caches are synced for resource quota
	I0821 11:06:58.581742       1 shared_informer.go:318] Caches are synced for resource quota
	I0821 11:06:58.608631       1 shared_informer.go:318] Caches are synced for attach detach
	I0821 11:06:59.045506       1 shared_informer.go:318] Caches are synced for garbage collector
	I0821 11:06:59.045522       1 shared_informer.go:318] Caches are synced for garbage collector
	I0821 11:06:59.045561       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-proxy [0828dc2ced588a98499aeca666b45dadc7a01c60197d187af4790a70e968ac60] <==
	* 
	* 
	* ==> kube-proxy [544c01d0a34426cdfe78783ac00f0b6d4d22107641e52eee037f178973c5645a] <==
	* E0821 11:06:43.882378       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-942142": dial tcp 192.168.76.2:8443: connect: connection refused
	I0821 11:06:47.264415       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I0821 11:06:47.264542       1 server_others.go:110] "Detected node IP" address="192.168.76.2"
	I0821 11:06:47.267497       1 server_others.go:554] "Using iptables proxy"
	I0821 11:06:47.361529       1 server_others.go:192] "Using iptables Proxier"
	I0821 11:06:47.361576       1 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0821 11:06:47.361586       1 server_others.go:200] "Creating dualStackProxier for iptables"
	I0821 11:06:47.361602       1 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0821 11:06:47.361638       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0821 11:06:47.362392       1 server.go:658] "Version info" version="v1.27.4"
	I0821 11:06:47.362456       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0821 11:06:47.363091       1 config.go:315] "Starting node config controller"
	I0821 11:06:47.363094       1 config.go:188] "Starting service config controller"
	I0821 11:06:47.363119       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0821 11:06:47.363124       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0821 11:06:47.363286       1 config.go:97] "Starting endpoint slice config controller"
	I0821 11:06:47.363326       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0821 11:06:47.463272       1 shared_informer.go:318] Caches are synced for service config
	I0821 11:06:47.463342       1 shared_informer.go:318] Caches are synced for node config
	I0821 11:06:47.464955       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [31d22f771e57016d722ad1604c45f8d36af9b4ef37787395b579eff24cc280d8] <==
	* 
	* 
	* ==> kube-scheduler [e277e19aebf2cdda3d12e800f195c0ee4097fb33065a1da2b37b3671a74a7277] <==
	* I0821 11:06:44.885439       1 serving.go:348] Generated self-signed cert in-memory
	W0821 11:06:47.073615       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0821 11:06:47.073732       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0821 11:06:47.073771       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0821 11:06:47.073812       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0821 11:06:47.163992       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.4"
	I0821 11:06:47.164108       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0821 11:06:47.166733       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0821 11:06:47.235657       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0821 11:06:47.235757       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0821 11:06:47.235777       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0821 11:06:47.336631       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Aug 21 11:06:43 pause-942142 kubelet[1567]: I0821 11:06:43.962626    1567 scope.go:115] "RemoveContainer" containerID="d6e073418281aeac0de2fe83f3ec9c4628e6b21045b247d27d79e3932634a8c4"
	Aug 21 11:06:44 pause-942142 kubelet[1567]: I0821 11:06:44.053202    1567 scope.go:115] "RemoveContainer" containerID="e54946981e64d13a71635a59d35d5b82c350dcd4844893d298edf4d888a69179"
	Aug 21 11:06:44 pause-942142 kubelet[1567]: I0821 11:06:44.136859    1567 scope.go:115] "RemoveContainer" containerID="ef09b7eb4707d223ba1f602fb7fab739839118125d9ce921bed2bce1b9aa3b70"
	Aug 21 11:06:44 pause-942142 kubelet[1567]: I0821 11:06:44.582710    1567 scope.go:115] "RemoveContainer" containerID="179d2ba79aac88d8d5a28cf3b3f3c792df2868d30b9d3e76e99879a3547221e3"
	Aug 21 11:06:44 pause-942142 kubelet[1567]: E0821 11:06:44.583190    1567 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-5d78c9869d-cw6gk_kube-system(f27ce3bb-203e-4d6b-988a-bdb76928ec2f)\"" pod="kube-system/coredns-5d78c9869d-cw6gk" podUID=f27ce3bb-203e-4d6b-988a-bdb76928ec2f
	Aug 21 11:06:45 pause-942142 kubelet[1567]: W0821 11:06:45.607626    1567 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Aug 21 11:06:45 pause-942142 kubelet[1567]: W0821 11:06:45.613742    1567 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Aug 21 11:06:50 pause-942142 kubelet[1567]: I0821 11:06:50.434588    1567 scope.go:115] "RemoveContainer" containerID="179d2ba79aac88d8d5a28cf3b3f3c792df2868d30b9d3e76e99879a3547221e3"
	Aug 21 11:06:50 pause-942142 kubelet[1567]: E0821 11:06:50.434968    1567 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-5d78c9869d-cw6gk_kube-system(f27ce3bb-203e-4d6b-988a-bdb76928ec2f)\"" pod="kube-system/coredns-5d78c9869d-cw6gk" podUID=f27ce3bb-203e-4d6b-988a-bdb76928ec2f
	Aug 21 11:06:50 pause-942142 kubelet[1567]: I0821 11:06:50.601092    1567 scope.go:115] "RemoveContainer" containerID="179d2ba79aac88d8d5a28cf3b3f3c792df2868d30b9d3e76e99879a3547221e3"
	Aug 21 11:06:50 pause-942142 kubelet[1567]: E0821 11:06:50.601467    1567 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-5d78c9869d-cw6gk_kube-system(f27ce3bb-203e-4d6b-988a-bdb76928ec2f)\"" pod="kube-system/coredns-5d78c9869d-cw6gk" podUID=f27ce3bb-203e-4d6b-988a-bdb76928ec2f
	Aug 21 11:06:55 pause-942142 kubelet[1567]: W0821 11:06:55.630164    1567 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Aug 21 11:07:02 pause-942142 kubelet[1567]: I0821 11:07:02.453617    1567 scope.go:115] "RemoveContainer" containerID="179d2ba79aac88d8d5a28cf3b3f3c792df2868d30b9d3e76e99879a3547221e3"
	Aug 21 11:07:05 pause-942142 kubelet[1567]: E0821 11:07:05.365896    1567 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/e8d86164e46c218910618cfb6079ce6f6df4767a1a99ce732155f63740fe4e59/diff" to get inode usage: stat /var/lib/containers/storage/overlay/e8d86164e46c218910618cfb6079ce6f6df4767a1a99ce732155f63740fe4e59/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-scheduler-pause-942142_977f703dfad04e101172ea1685382344/kube-scheduler/0.log" to get inode usage: stat /var/log/pods/kube-system_kube-scheduler-pause-942142_977f703dfad04e101172ea1685382344/kube-scheduler/0.log: no such file or directory
	Aug 21 11:07:05 pause-942142 kubelet[1567]: E0821 11:07:05.373083    1567 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/e53dc8cc5c0793358e3b8054c350cc10e12f83e5db8f27da73ef23a864323520/diff" to get inode usage: stat /var/lib/containers/storage/overlay/e53dc8cc5c0793358e3b8054c350cc10e12f83e5db8f27da73ef23a864323520/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-controller-manager-pause-942142_fd1c8725478202c37350e553fb750bf5/kube-controller-manager/0.log" to get inode usage: stat /var/log/pods/kube-system_kube-controller-manager-pause-942142_fd1c8725478202c37350e553fb750bf5/kube-controller-manager/0.log: no such file or directory
	Aug 21 11:07:05 pause-942142 kubelet[1567]: E0821 11:07:05.437803    1567 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/448c888fcf25878990a92e930d693a9d7bdf214d6e806d52c1f7b9c9deccd474/diff" to get inode usage: stat /var/lib/containers/storage/overlay/448c888fcf25878990a92e930d693a9d7bdf214d6e806d52c1f7b9c9deccd474/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-apiserver-pause-942142_2ef6a030af3a3f686c6cbfa234b153b4/kube-apiserver/0.log" to get inode usage: stat /var/log/pods/kube-system_kube-apiserver-pause-942142_2ef6a030af3a3f686c6cbfa234b153b4/kube-apiserver/0.log: no such file or directory
	Aug 21 11:07:05 pause-942142 kubelet[1567]: E0821 11:07:05.457871    1567 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/38051ae2d5fba9c92400d0676ae4d9713940756f324eff1dc08975b5d9a32565/diff" to get inode usage: stat /var/lib/containers/storage/overlay/38051ae2d5fba9c92400d0676ae4d9713940756f324eff1dc08975b5d9a32565/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_etcd-pause-942142_c8274e3bf6266e5d0e1b49746e7f9a41/etcd/0.log" to get inode usage: stat /var/log/pods/kube-system_etcd-pause-942142_c8274e3bf6266e5d0e1b49746e7f9a41/etcd/0.log: no such file or directory
	Aug 21 11:07:05 pause-942142 kubelet[1567]: E0821 11:07:05.569905    1567 manager.go:1106] Failed to create existing container: /crio-b4afa50bde8f0e095b733d89c3550dc0990e2efeceaf419a0833e989ed50ba1b: Error finding container b4afa50bde8f0e095b733d89c3550dc0990e2efeceaf419a0833e989ed50ba1b: Status 404 returned error can't find the container with id b4afa50bde8f0e095b733d89c3550dc0990e2efeceaf419a0833e989ed50ba1b
	Aug 21 11:07:05 pause-942142 kubelet[1567]: E0821 11:07:05.583681    1567 manager.go:1106] Failed to create existing container: /crio-d9fcc0be76b0302ec0df69f463682ba3e7b96b716d8725ffe49b8b9987f906c6: Error finding container d9fcc0be76b0302ec0df69f463682ba3e7b96b716d8725ffe49b8b9987f906c6: Status 404 returned error can't find the container with id d9fcc0be76b0302ec0df69f463682ba3e7b96b716d8725ffe49b8b9987f906c6
	Aug 21 11:07:05 pause-942142 kubelet[1567]: E0821 11:07:05.587876    1567 manager.go:1106] Failed to create existing container: /docker/309473feabb5ddf8e1b31a85fe1fa26965aae266f6a8b3dfd5daa9ce19790bd4/crio-b4afa50bde8f0e095b733d89c3550dc0990e2efeceaf419a0833e989ed50ba1b: Error finding container b4afa50bde8f0e095b733d89c3550dc0990e2efeceaf419a0833e989ed50ba1b: Status 404 returned error can't find the container with id b4afa50bde8f0e095b733d89c3550dc0990e2efeceaf419a0833e989ed50ba1b
	Aug 21 11:07:05 pause-942142 kubelet[1567]: E0821 11:07:05.588614    1567 manager.go:1106] Failed to create existing container: /docker/309473feabb5ddf8e1b31a85fe1fa26965aae266f6a8b3dfd5daa9ce19790bd4/crio-c2212720a9625c1d07398062176961401d967df6b8e4dc2fbb6de8553c0ae60e: Error finding container c2212720a9625c1d07398062176961401d967df6b8e4dc2fbb6de8553c0ae60e: Status 404 returned error can't find the container with id c2212720a9625c1d07398062176961401d967df6b8e4dc2fbb6de8553c0ae60e
	Aug 21 11:07:05 pause-942142 kubelet[1567]: E0821 11:07:05.592309    1567 manager.go:1106] Failed to create existing container: /docker/309473feabb5ddf8e1b31a85fe1fa26965aae266f6a8b3dfd5daa9ce19790bd4/crio-16e97ff277a28122017474cb4e6b067ec9cb646189d86462dd8f408a79f9a86b: Error finding container 16e97ff277a28122017474cb4e6b067ec9cb646189d86462dd8f408a79f9a86b: Status 404 returned error can't find the container with id 16e97ff277a28122017474cb4e6b067ec9cb646189d86462dd8f408a79f9a86b
	Aug 21 11:07:05 pause-942142 kubelet[1567]: E0821 11:07:05.593070    1567 manager.go:1106] Failed to create existing container: /crio-16e97ff277a28122017474cb4e6b067ec9cb646189d86462dd8f408a79f9a86b: Error finding container 16e97ff277a28122017474cb4e6b067ec9cb646189d86462dd8f408a79f9a86b: Status 404 returned error can't find the container with id 16e97ff277a28122017474cb4e6b067ec9cb646189d86462dd8f408a79f9a86b
	Aug 21 11:07:05 pause-942142 kubelet[1567]: E0821 11:07:05.594589    1567 manager.go:1106] Failed to create existing container: /docker/309473feabb5ddf8e1b31a85fe1fa26965aae266f6a8b3dfd5daa9ce19790bd4/crio-d9fcc0be76b0302ec0df69f463682ba3e7b96b716d8725ffe49b8b9987f906c6: Error finding container d9fcc0be76b0302ec0df69f463682ba3e7b96b716d8725ffe49b8b9987f906c6: Status 404 returned error can't find the container with id d9fcc0be76b0302ec0df69f463682ba3e7b96b716d8725ffe49b8b9987f906c6
	Aug 21 11:07:05 pause-942142 kubelet[1567]: E0821 11:07:05.596148    1567 manager.go:1106] Failed to create existing container: /crio-c2212720a9625c1d07398062176961401d967df6b8e4dc2fbb6de8553c0ae60e: Error finding container c2212720a9625c1d07398062176961401d967df6b8e4dc2fbb6de8553c0ae60e: Status 404 returned error can't find the container with id c2212720a9625c1d07398062176961401d967df6b8e4dc2fbb6de8553c0ae60e
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-942142 -n pause-942142
helpers_test.go:261: (dbg) Run:  kubectl --context pause-942142 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-942142
helpers_test.go:235: (dbg) docker inspect pause-942142:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "309473feabb5ddf8e1b31a85fe1fa26965aae266f6a8b3dfd5daa9ce19790bd4",
	        "Created": "2023-08-21T11:05:44.569615649Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 179454,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-08-21T11:05:44.915550528Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6cc01e6091959400f260dc442708e7c71630b58dab1f7c344cb00926bd84950",
	        "ResolvConfPath": "/var/lib/docker/containers/309473feabb5ddf8e1b31a85fe1fa26965aae266f6a8b3dfd5daa9ce19790bd4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/309473feabb5ddf8e1b31a85fe1fa26965aae266f6a8b3dfd5daa9ce19790bd4/hostname",
	        "HostsPath": "/var/lib/docker/containers/309473feabb5ddf8e1b31a85fe1fa26965aae266f6a8b3dfd5daa9ce19790bd4/hosts",
	        "LogPath": "/var/lib/docker/containers/309473feabb5ddf8e1b31a85fe1fa26965aae266f6a8b3dfd5daa9ce19790bd4/309473feabb5ddf8e1b31a85fe1fa26965aae266f6a8b3dfd5daa9ce19790bd4-json.log",
	        "Name": "/pause-942142",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-942142:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-942142",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9a3cc4d3a8405a830bbcec04e407d53e79e7d407ce16e9d5fd07cfd156623f5b-init/diff:/var/lib/docker/overlay2/524bb0f129210e266d288d085768bab72d4735717d72ebbb4611a7bc558cb4ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9a3cc4d3a8405a830bbcec04e407d53e79e7d407ce16e9d5fd07cfd156623f5b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9a3cc4d3a8405a830bbcec04e407d53e79e7d407ce16e9d5fd07cfd156623f5b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9a3cc4d3a8405a830bbcec04e407d53e79e7d407ce16e9d5fd07cfd156623f5b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-942142",
	                "Source": "/var/lib/docker/volumes/pause-942142/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-942142",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-942142",
	                "name.minikube.sigs.k8s.io": "pause-942142",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f05339805d36552a8e8c1b37f26b9a02c31c5546b3dbecc6d89c547ec78d1516",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32968"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32967"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32964"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32966"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32965"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f05339805d36",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-942142": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "309473feabb5",
	                        "pause-942142"
	                    ],
	                    "NetworkID": "eb70cca5b08cbac86675674e217e1a9ed987f2b066a7cdfffc38ae2efe8409e3",
	                    "EndpointID": "77560130dd4bc75ee71659f96e31d8e0587e361c4efe1e988602596637e1ceda",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-942142 -n pause-942142
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-942142 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-942142 logs -n 25: (1.548045754s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-872088 sudo                 | cilium-872088             | jenkins | v1.31.2 | 21 Aug 23 11:05 UTC |                     |
	|         | containerd config dump                |                           |         |         |                     |                     |
	| ssh     | -p cilium-872088 sudo                 | cilium-872088             | jenkins | v1.31.2 | 21 Aug 23 11:05 UTC |                     |
	|         | systemctl status crio --all           |                           |         |         |                     |                     |
	|         | --full --no-pager                     |                           |         |         |                     |                     |
	| ssh     | -p cilium-872088 sudo                 | cilium-872088             | jenkins | v1.31.2 | 21 Aug 23 11:05 UTC |                     |
	|         | systemctl cat crio --no-pager         |                           |         |         |                     |                     |
	| ssh     | -p cilium-872088 sudo find            | cilium-872088             | jenkins | v1.31.2 | 21 Aug 23 11:05 UTC |                     |
	|         | /etc/crio -type f -exec sh -c         |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                  |                           |         |         |                     |                     |
	| ssh     | -p cilium-872088 sudo crio            | cilium-872088             | jenkins | v1.31.2 | 21 Aug 23 11:05 UTC |                     |
	|         | config                                |                           |         |         |                     |                     |
	| delete  | -p cilium-872088                      | cilium-872088             | jenkins | v1.31.2 | 21 Aug 23 11:05 UTC | 21 Aug 23 11:05 UTC |
	| start   | -p force-systemd-env-121880           | force-systemd-env-121880  | jenkins | v1.31.2 | 21 Aug 23 11:05 UTC | 21 Aug 23 11:05 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p force-systemd-flag-577578          | force-systemd-flag-577578 | jenkins | v1.31.2 | 21 Aug 23 11:05 UTC | 21 Aug 23 11:05 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p running-upgrade-619999             | running-upgrade-619999    | jenkins | v1.31.2 | 21 Aug 23 11:05 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-619999             | running-upgrade-619999    | jenkins | v1.31.2 | 21 Aug 23 11:05 UTC | 21 Aug 23 11:05 UTC |
	| start   | -p cert-expiration-650157             | cert-expiration-650157    | jenkins | v1.31.2 | 21 Aug 23 11:05 UTC | 21 Aug 23 11:05 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=docker                       |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p missing-upgrade-586789             | missing-upgrade-586789    | jenkins | v1.31.2 | 21 Aug 23 11:05 UTC | 21 Aug 23 11:06 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-121880           | force-systemd-env-121880  | jenkins | v1.31.2 | 21 Aug 23 11:05 UTC | 21 Aug 23 11:05 UTC |
	| start   | -p cert-options-400386                | cert-options-400386       | jenkins | v1.31.2 | 21 Aug 23 11:05 UTC | 21 Aug 23 11:05 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=docker                       |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-577578 ssh cat     | force-systemd-flag-577578 | jenkins | v1.31.2 | 21 Aug 23 11:05 UTC | 21 Aug 23 11:05 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-577578          | force-systemd-flag-577578 | jenkins | v1.31.2 | 21 Aug 23 11:05 UTC | 21 Aug 23 11:05 UTC |
	| start   | -p pause-942142 --memory=2048         | pause-942142              | jenkins | v1.31.2 | 21 Aug 23 11:05 UTC | 21 Aug 23 11:06 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=docker            |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-400386 ssh               | cert-options-400386       | jenkins | v1.31.2 | 21 Aug 23 11:05 UTC | 21 Aug 23 11:05 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-400386 -- sudo        | cert-options-400386       | jenkins | v1.31.2 | 21 Aug 23 11:05 UTC | 21 Aug 23 11:05 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-400386                | cert-options-400386       | jenkins | v1.31.2 | 21 Aug 23 11:05 UTC | 21 Aug 23 11:06 UTC |
	| start   | -p kubernetes-upgrade-433377          | kubernetes-upgrade-433377 | jenkins | v1.31.2 | 21 Aug 23 11:06 UTC | 21 Aug 23 11:06 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p missing-upgrade-586789             | missing-upgrade-586789    | jenkins | v1.31.2 | 21 Aug 23 11:06 UTC | 21 Aug 23 11:06 UTC |
	| start   | -p pause-942142                       | pause-942142              | jenkins | v1.31.2 | 21 Aug 23 11:06 UTC | 21 Aug 23 11:07 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-433377          | kubernetes-upgrade-433377 | jenkins | v1.31.2 | 21 Aug 23 11:06 UTC | 21 Aug 23 11:06 UTC |
	| start   | -p kubernetes-upgrade-433377          | kubernetes-upgrade-433377 | jenkins | v1.31.2 | 21 Aug 23 11:06 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.0-rc.1     |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/21 11:06:56
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0821 11:06:56.288788  194386 out.go:296] Setting OutFile to fd 1 ...
	I0821 11:06:56.288901  194386 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 11:06:56.288911  194386 out.go:309] Setting ErrFile to fd 2...
	I0821 11:06:56.288915  194386 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 11:06:56.289128  194386 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17102-5717/.minikube/bin
	I0821 11:06:56.289718  194386 out.go:303] Setting JSON to false
	I0821 11:06:56.291323  194386 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2966,"bootTime":1692613050,"procs":694,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0821 11:06:56.291417  194386 start.go:138] virtualization: kvm guest
	I0821 11:06:56.294147  194386 out.go:177] * [kubernetes-upgrade-433377] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0821 11:06:56.295775  194386 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 11:06:56.297215  194386 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 11:06:56.295847  194386 notify.go:220] Checking for updates...
	I0821 11:06:56.300944  194386 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17102-5717/kubeconfig
	I0821 11:06:56.302425  194386 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17102-5717/.minikube
	I0821 11:06:56.303793  194386 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0821 11:06:56.305070  194386 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 11:06:56.306654  194386 config.go:182] Loaded profile config "kubernetes-upgrade-433377": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0821 11:06:56.307080  194386 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 11:06:56.329843  194386 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0821 11:06:56.329937  194386 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0821 11:06:56.387128  194386 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:79 OomKillDisable:true NGoroutines:82 SystemTime:2023-08-21 11:06:56.378036546 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0821 11:06:56.387244  194386 docker.go:294] overlay module found
	I0821 11:06:56.388917  194386 out.go:177] * Using the docker driver based on existing profile
	I0821 11:06:56.390162  194386 start.go:298] selected driver: docker
	I0821 11:06:56.390175  194386 start.go:902] validating driver "docker" against &{Name:kubernetes-upgrade-433377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-433377 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 11:06:56.390260  194386 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 11:06:56.391083  194386 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0821 11:06:56.445492  194386 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:79 OomKillDisable:true NGoroutines:82 SystemTime:2023-08-21 11:06:56.436315238 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0821 11:06:56.445831  194386 cni.go:84] Creating CNI manager for ""
	I0821 11:06:56.445850  194386 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0821 11:06:56.445861  194386 start_flags.go:319] config:
	{Name:kubernetes-upgrade-433377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0-rc.1 ClusterName:kubernetes-upgrade-433377 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0}
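
The validated config above pins the profile to the docker driver, the crio runtime, and Kubernetes v1.28.0-rc.1 with 2200MB of memory and 2 CPUs. A roughly equivalent CLI invocation (a sketch using minikube's documented flags; the test harness drives this through the Go API rather than the command line):

	minikube start -p kubernetes-upgrade-433377 \
	  --driver=docker \
	  --container-runtime=crio \
	  --kubernetes-version=v1.28.0-rc.1 \
	  --memory=2200 --cpus=2
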
	I0821 11:06:56.447679  194386 out.go:177] * Starting control plane node kubernetes-upgrade-433377 in cluster kubernetes-upgrade-433377
	I0821 11:06:56.448916  194386 cache.go:122] Beginning downloading kic base image for docker with crio
	I0821 11:06:56.450171  194386 out.go:177] * Pulling base image ...
	I0821 11:06:56.451519  194386 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.1 and runtime crio
	I0821 11:06:56.451583  194386 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17102-5717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I0821 11:06:56.451610  194386 cache.go:57] Caching tarball of preloaded images
	I0821 11:06:56.451625  194386 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0821 11:06:56.451701  194386 preload.go:174] Found /home/jenkins/minikube-integration/17102-5717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-rc.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0821 11:06:56.451716  194386 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.0-rc.1 on crio
	I0821 11:06:56.451858  194386 profile.go:148] Saving config to /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/kubernetes-upgrade-433377/config.json ...
	I0821 11:06:56.469201  194386 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0821 11:06:56.469231  194386 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0821 11:06:56.469255  194386 cache.go:195] Successfully downloaded all kic artifacts
	I0821 11:06:56.469314  194386 start.go:365] acquiring machines lock for kubernetes-upgrade-433377: {Name:mk2296809b3f2eb8da8eba0f1ea9549353ccf3bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 11:06:56.469402  194386 start.go:369] acquired machines lock for "kubernetes-upgrade-433377" in 53.68µs
	I0821 11:06:56.469428  194386 start.go:96] Skipping create...Using existing machine configuration
	I0821 11:06:56.469442  194386 fix.go:54] fixHost starting: 
	I0821 11:06:56.469766  194386 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-433377 --format={{.State.Status}}
	I0821 11:06:56.486134  194386 fix.go:102] recreateIfNeeded on kubernetes-upgrade-433377: state=Stopped err=<nil>
	W0821 11:06:56.486167  194386 fix.go:128] unexpected machine state, will restart: <nil>
	I0821 11:06:56.488038  194386 out.go:177] * Restarting existing docker container for "kubernetes-upgrade-433377" ...
	I0821 11:06:54.659328  188079 pod_ready.go:102] pod "coredns-5d78c9869d-cw6gk" in "kube-system" namespace has status "Ready":"False"
	I0821 11:06:57.070002  188079 pod_ready.go:102] pod "coredns-5d78c9869d-cw6gk" in "kube-system" namespace has status "Ready":"False"
	I0821 11:06:56.489473  194386 cli_runner.go:164] Run: docker start kubernetes-upgrade-433377
	I0821 11:06:56.776478  194386 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-433377 --format={{.State.Status}}
	I0821 11:06:56.796212  194386 kic.go:426] container "kubernetes-upgrade-433377" state is running.
	I0821 11:06:56.796697  194386 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-433377
	I0821 11:06:56.815752  194386 profile.go:148] Saving config to /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/kubernetes-upgrade-433377/config.json ...
	I0821 11:06:56.816031  194386 machine.go:88] provisioning docker machine ...
	I0821 11:06:56.816061  194386 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-433377"
	I0821 11:06:56.816125  194386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-433377
	I0821 11:06:56.833482  194386 main.go:141] libmachine: Using SSH client type: native
	I0821 11:06:56.834091  194386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 127.0.0.1 32981 <nil> <nil>}
	I0821 11:06:56.834110  194386 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-433377 && echo "kubernetes-upgrade-433377" | sudo tee /etc/hostname
	I0821 11:06:56.834755  194386 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52546->127.0.0.1:32981: read: connection reset by peer
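
The dial error above is expected this early: sshd inside the freshly restarted container is not yet accepting connections, and the provisioner keeps retrying until the handshake succeeds (about three seconds later in this run). A standalone wait loop with the same effect might look like this (hypothetical; port and key path taken from the surrounding log lines):

	until ssh -p 32981 -o ConnectTimeout=2 -o StrictHostKeyChecking=no \
	    -i /home/jenkins/minikube-integration/17102-5717/.minikube/machines/kubernetes-upgrade-433377/id_rsa \
	    docker@127.0.0.1 true 2>/dev/null; do
	  sleep 1
	done
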
	I0821 11:06:59.978285  194386 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-433377
	
	I0821 11:06:59.978377  194386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-433377
	I0821 11:06:59.996830  194386 main.go:141] libmachine: Using SSH client type: native
	I0821 11:06:59.997237  194386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 127.0.0.1 32981 <nil> <nil>}
	I0821 11:06:59.997263  194386 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-433377' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-433377/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-433377' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0821 11:07:00.127678  194386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
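
The guarded heredoc above only touches /etc/hosts when the hostname mapping is absent, so re-provisioning the same machine is a no-op. A quick manual check inside the node (hypothetical, not part of the test run):

	grep '^127\.0\.1\.1' /etc/hosts
	# expected: 127.0.1.1 kubernetes-upgrade-433377
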
	I0821 11:07:00.127709  194386 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17102-5717/.minikube CaCertPath:/home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17102-5717/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17102-5717/.minikube}
	I0821 11:07:00.127732  194386 ubuntu.go:177] setting up certificates
	I0821 11:07:00.127752  194386 provision.go:83] configureAuth start
	I0821 11:07:00.127805  194386 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-433377
	I0821 11:07:00.145606  194386 provision.go:138] copyHostCerts
	I0821 11:07:00.145675  194386 exec_runner.go:144] found /home/jenkins/minikube-integration/17102-5717/.minikube/key.pem, removing ...
	I0821 11:07:00.145695  194386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17102-5717/.minikube/key.pem
	I0821 11:07:00.145769  194386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17102-5717/.minikube/key.pem (1675 bytes)
	I0821 11:07:00.145889  194386 exec_runner.go:144] found /home/jenkins/minikube-integration/17102-5717/.minikube/ca.pem, removing ...
	I0821 11:07:00.145901  194386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17102-5717/.minikube/ca.pem
	I0821 11:07:00.145937  194386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17102-5717/.minikube/ca.pem (1078 bytes)
	I0821 11:07:00.146024  194386 exec_runner.go:144] found /home/jenkins/minikube-integration/17102-5717/.minikube/cert.pem, removing ...
	I0821 11:07:00.146034  194386 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17102-5717/.minikube/cert.pem
	I0821 11:07:00.146073  194386 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17102-5717/.minikube/cert.pem (1123 bytes)
	I0821 11:07:00.146159  194386 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17102-5717/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-433377 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-433377]
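
provision.go generates the server certificate in Go, signing it with the machine CA and embedding the SAN list shown above. For comparison, a sketch of an equivalent certificate produced with openssl (assuming the same ca.pem/ca-key.pem; this is not how minikube actually builds it):

	openssl req -new -newkey rsa:2048 -nodes \
	  -keyout server-key.pem -out server.csr \
	  -subj "/O=jenkins.kubernetes-upgrade-433377"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
	  -CAcreateserial -out server.pem -days 365 \
	  -extfile <(printf "subjectAltName=IP:192.168.67.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:kubernetes-upgrade-433377")
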
	I0821 11:07:00.356462  194386 provision.go:172] copyRemoteCerts
	I0821 11:07:00.356545  194386 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0821 11:07:00.356592  194386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-433377
	I0821 11:07:00.376709  194386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32981 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/kubernetes-upgrade-433377/id_rsa Username:docker}
	I0821 11:07:00.472312  194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0821 11:07:00.495636  194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0821 11:07:00.517657  194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0821 11:07:00.539891  194386 provision.go:86] duration metric: configureAuth took 412.125455ms
	I0821 11:07:00.539913  194386 ubuntu.go:193] setting minikube options for container-runtime
	I0821 11:07:00.540075  194386 config.go:182] Loaded profile config "kubernetes-upgrade-433377": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0-rc.1
	I0821 11:07:00.540163  194386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-433377
	I0821 11:07:00.559595  194386 main.go:141] libmachine: Using SSH client type: native
	I0821 11:07:00.560205  194386 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 127.0.0.1 32981 <nil> <nil>}
	I0821 11:07:00.560233  194386 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0821 11:07:00.837484  194386 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0821 11:07:00.837528  194386 machine.go:91] provisioned docker machine in 4.021480148s
	I0821 11:07:00.837541  194386 start.go:300] post-start starting for "kubernetes-upgrade-433377" (driver="docker")
	I0821 11:07:00.837557  194386 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0821 11:07:00.837638  194386 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0821 11:07:00.837677  194386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-433377
	I0821 11:07:00.856482  194386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32981 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/kubernetes-upgrade-433377/id_rsa Username:docker}
	I0821 11:07:00.948024  194386 ssh_runner.go:195] Run: cat /etc/os-release
	I0821 11:07:00.951490  194386 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0821 11:07:00.951532  194386 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0821 11:07:00.951546  194386 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0821 11:07:00.951554  194386 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0821 11:07:00.951565  194386 filesync.go:126] Scanning /home/jenkins/minikube-integration/17102-5717/.minikube/addons for local assets ...
	I0821 11:07:00.951625  194386 filesync.go:126] Scanning /home/jenkins/minikube-integration/17102-5717/.minikube/files for local assets ...
	I0821 11:07:00.951729  194386 filesync.go:149] local asset: /home/jenkins/minikube-integration/17102-5717/.minikube/files/etc/ssl/certs/124602.pem -> 124602.pem in /etc/ssl/certs
	I0821 11:07:00.951850  194386 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0821 11:07:00.961432  194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/files/etc/ssl/certs/124602.pem --> /etc/ssl/certs/124602.pem (1708 bytes)
	I0821 11:07:00.985402  194386 start.go:303] post-start completed in 147.84441ms
	I0821 11:07:00.985499  194386 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0821 11:07:00.985544  194386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-433377
	I0821 11:07:01.002824  194386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32981 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/kubernetes-upgrade-433377/id_rsa Username:docker}
	I0821 11:07:01.092015  194386 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0821 11:07:01.096274  194386 fix.go:56] fixHost completed within 4.626829787s
	I0821 11:07:01.096295  194386 start.go:83] releasing machines lock for "kubernetes-upgrade-433377", held for 4.626879169s
	I0821 11:07:01.096358  194386 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-433377
	I0821 11:07:01.113228  194386 ssh_runner.go:195] Run: cat /version.json
	I0821 11:07:01.113269  194386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-433377
	I0821 11:07:01.113338  194386 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0821 11:07:01.113428  194386 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-433377
	I0821 11:07:01.131329  194386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32981 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/kubernetes-upgrade-433377/id_rsa Username:docker}
	I0821 11:07:01.131582  194386 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32981 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/kubernetes-upgrade-433377/id_rsa Username:docker}
	I0821 11:07:01.320695  194386 ssh_runner.go:195] Run: systemctl --version
	I0821 11:07:01.324779  194386 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0821 11:07:01.470135  194386 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0821 11:07:01.474465  194386 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0821 11:07:01.483424  194386 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0821 11:07:01.483506  194386 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0821 11:07:01.491401  194386 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0821 11:07:01.491426  194386 start.go:466] detecting cgroup driver to use...
	I0821 11:07:01.491460  194386 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0821 11:07:01.491508  194386 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0821 11:07:01.502121  194386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0821 11:07:01.512173  194386 docker.go:196] disabling cri-docker service (if available) ...
	I0821 11:07:01.512235  194386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0821 11:07:01.523669  194386 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0821 11:07:01.535046  194386 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0821 11:07:01.630824  194386 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0821 11:07:01.708403  194386 docker.go:212] disabling docker service ...
	I0821 11:07:01.708464  194386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0821 11:07:01.719840  194386 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0821 11:07:01.730756  194386 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0821 11:07:01.821821  194386 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0821 11:07:01.907873  194386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0821 11:07:01.918134  194386 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0821 11:07:01.933609  194386 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0821 11:07:01.933661  194386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0821 11:07:01.943122  194386 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0821 11:07:01.943171  194386 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0821 11:07:01.952554  194386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0821 11:07:01.961284  194386 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0821 11:07:01.970518  194386 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0821 11:07:01.979248  194386 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0821 11:07:01.987410  194386 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0821 11:07:01.995453  194386 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0821 11:07:02.073697  194386 ssh_runner.go:195] Run: sudo systemctl restart crio
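
After the sed edits above, the cri-o drop-in should contain the three overrides below (reconstructed from the sed expressions, not captured from the node); the subsequent systemctl restart crio is what makes them take effect:

	# /etc/crio/crio.conf.d/02-crio.conf (expected state)
	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
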
	I0821 11:07:02.847220  194386 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0821 11:07:02.847288  194386 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0821 11:07:02.850675  194386 start.go:534] Will wait 60s for crictl version
	I0821 11:07:02.850729  194386 ssh_runner.go:195] Run: which crictl
	I0821 11:07:02.853978  194386 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0821 11:07:02.889730  194386 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0821 11:07:02.889808  194386 ssh_runner.go:195] Run: crio --version
	I0821 11:07:02.924858  194386 ssh_runner.go:195] Run: crio --version
	I0821 11:07:02.964727  194386 out.go:177] * Preparing Kubernetes v1.28.0-rc.1 on CRI-O 1.24.6 ...
	I0821 11:06:59.569523  188079 pod_ready.go:102] pod "coredns-5d78c9869d-cw6gk" in "kube-system" namespace has status "Ready":"False"
	I0821 11:07:02.070307  188079 pod_ready.go:102] pod "coredns-5d78c9869d-cw6gk" in "kube-system" namespace has status "Ready":"False"
	I0821 11:07:04.069889  188079 pod_ready.go:92] pod "coredns-5d78c9869d-cw6gk" in "kube-system" namespace has status "Ready":"True"
	I0821 11:07:04.069911  188079 pod_ready.go:81] duration metric: took 13.516869313s waiting for pod "coredns-5d78c9869d-cw6gk" in "kube-system" namespace to be "Ready" ...
	I0821 11:07:04.069932  188079 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-942142" in "kube-system" namespace to be "Ready" ...
	I0821 11:07:04.074615  188079 pod_ready.go:92] pod "etcd-pause-942142" in "kube-system" namespace has status "Ready":"True"
	I0821 11:07:04.074633  188079 pod_ready.go:81] duration metric: took 4.695377ms waiting for pod "etcd-pause-942142" in "kube-system" namespace to be "Ready" ...
	I0821 11:07:04.074646  188079 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-942142" in "kube-system" namespace to be "Ready" ...
	I0821 11:07:04.080100  188079 pod_ready.go:92] pod "kube-apiserver-pause-942142" in "kube-system" namespace has status "Ready":"True"
	I0821 11:07:04.080128  188079 pod_ready.go:81] duration metric: took 5.474752ms waiting for pod "kube-apiserver-pause-942142" in "kube-system" namespace to be "Ready" ...
	I0821 11:07:04.080143  188079 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-942142" in "kube-system" namespace to be "Ready" ...
	I0821 11:07:04.085038  188079 pod_ready.go:92] pod "kube-controller-manager-pause-942142" in "kube-system" namespace has status "Ready":"True"
	I0821 11:07:04.085063  188079 pod_ready.go:81] duration metric: took 4.911727ms waiting for pod "kube-controller-manager-pause-942142" in "kube-system" namespace to be "Ready" ...
	I0821 11:07:04.085076  188079 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vbspt" in "kube-system" namespace to be "Ready" ...
	I0821 11:07:04.090042  188079 pod_ready.go:92] pod "kube-proxy-vbspt" in "kube-system" namespace has status "Ready":"True"
	I0821 11:07:04.090063  188079 pod_ready.go:81] duration metric: took 4.980637ms waiting for pod "kube-proxy-vbspt" in "kube-system" namespace to be "Ready" ...
	I0821 11:07:04.090074  188079 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-942142" in "kube-system" namespace to be "Ready" ...
	I0821 11:07:04.467663  188079 pod_ready.go:92] pod "kube-scheduler-pause-942142" in "kube-system" namespace has status "Ready":"True"
	I0821 11:07:04.467699  188079 pod_ready.go:81] duration metric: took 377.614764ms waiting for pod "kube-scheduler-pause-942142" in "kube-system" namespace to be "Ready" ...
	I0821 11:07:04.467711  188079 pod_ready.go:38] duration metric: took 15.938249122s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0821 11:07:04.467731  188079 api_server.go:52] waiting for apiserver process to appear ...
	I0821 11:07:04.467797  188079 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0821 11:07:04.478621  188079 api_server.go:72] duration metric: took 16.013450754s to wait for apiserver process to appear ...
	I0821 11:07:04.478649  188079 api_server.go:88] waiting for apiserver healthz status ...
	I0821 11:07:04.478671  188079 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0821 11:07:04.483399  188079 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0821 11:07:04.484477  188079 api_server.go:141] control plane version: v1.27.4
	I0821 11:07:04.484497  188079 api_server.go:131] duration metric: took 5.84138ms to wait for apiserver health ...
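
The healthz gate above is a plain HTTPS GET against the apiserver; with Kubernetes' default RBAC the /healthz path is readable without client credentials. A manual equivalent (hypothetical; -k skips verification against the minikube CA):

	curl -k https://192.168.76.2:8443/healthz
	# expected body: ok
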
	I0821 11:07:04.484505  188079 system_pods.go:43] waiting for kube-system pods to appear ...
	I0821 11:07:04.669603  188079 system_pods.go:59] 8 kube-system pods found
	I0821 11:07:04.669633  188079 system_pods.go:61] "coredns-5d78c9869d-2fzvb" [d4ffa418-e75b-4b1b-a386-f376ea79a072] Running
	I0821 11:07:04.669637  188079 system_pods.go:61] "coredns-5d78c9869d-cw6gk" [f27ce3bb-203e-4d6b-988a-bdb76928ec2f] Running
	I0821 11:07:04.669642  188079 system_pods.go:61] "etcd-pause-942142" [de553f76-9b52-4880-8853-f7fbc1e46d1c] Running
	I0821 11:07:04.669647  188079 system_pods.go:61] "kindnet-qrlk5" [f0c3caaf-c929-49de-97b5-f2ad04d37a2c] Running
	I0821 11:07:04.669651  188079 system_pods.go:61] "kube-apiserver-pause-942142" [5f01265b-71bb-4303-b0e2-d395043684a8] Running
	I0821 11:07:04.669656  188079 system_pods.go:61] "kube-controller-manager-pause-942142" [9d526837-7c56-4997-98b1-41d391a8dbfe] Running
	I0821 11:07:04.669660  188079 system_pods.go:61] "kube-proxy-vbspt" [86d66575-f671-4128-a820-81d28df6b57b] Running
	I0821 11:07:04.669666  188079 system_pods.go:61] "kube-scheduler-pause-942142" [2f5fe7c1-dbff-4666-8372-54b630511290] Running
	I0821 11:07:04.669672  188079 system_pods.go:74] duration metric: took 185.163067ms to wait for pod list to return data ...
	I0821 11:07:04.669679  188079 default_sa.go:34] waiting for default service account to be created ...
	I0821 11:07:04.867263  188079 default_sa.go:45] found service account: "default"
	I0821 11:07:04.867289  188079 default_sa.go:55] duration metric: took 197.602175ms for default service account to be created ...
	I0821 11:07:04.867300  188079 system_pods.go:116] waiting for k8s-apps to be running ...
	I0821 11:07:05.072522  188079 system_pods.go:86] 8 kube-system pods found
	I0821 11:07:05.072561  188079 system_pods.go:89] "coredns-5d78c9869d-2fzvb" [d4ffa418-e75b-4b1b-a386-f376ea79a072] Running
	I0821 11:07:05.072570  188079 system_pods.go:89] "coredns-5d78c9869d-cw6gk" [f27ce3bb-203e-4d6b-988a-bdb76928ec2f] Running
	I0821 11:07:05.072577  188079 system_pods.go:89] "etcd-pause-942142" [de553f76-9b52-4880-8853-f7fbc1e46d1c] Running
	I0821 11:07:05.072585  188079 system_pods.go:89] "kindnet-qrlk5" [f0c3caaf-c929-49de-97b5-f2ad04d37a2c] Running
	I0821 11:07:05.072593  188079 system_pods.go:89] "kube-apiserver-pause-942142" [5f01265b-71bb-4303-b0e2-d395043684a8] Running
	I0821 11:07:05.072609  188079 system_pods.go:89] "kube-controller-manager-pause-942142" [9d526837-7c56-4997-98b1-41d391a8dbfe] Running
	I0821 11:07:05.072616  188079 system_pods.go:89] "kube-proxy-vbspt" [86d66575-f671-4128-a820-81d28df6b57b] Running
	I0821 11:07:05.072624  188079 system_pods.go:89] "kube-scheduler-pause-942142" [2f5fe7c1-dbff-4666-8372-54b630511290] Running
	I0821 11:07:05.072632  188079 system_pods.go:126] duration metric: took 205.326598ms to wait for k8s-apps to be running ...
	I0821 11:07:05.072641  188079 system_svc.go:44] waiting for kubelet service to be running ....
	I0821 11:07:05.072691  188079 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0821 11:07:05.091803  188079 system_svc.go:56] duration metric: took 19.150807ms WaitForService to wait for kubelet.
	I0821 11:07:05.091834  188079 kubeadm.go:581] duration metric: took 16.626667922s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0821 11:07:05.091856  188079 node_conditions.go:102] verifying NodePressure condition ...
	I0821 11:07:05.267303  188079 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0821 11:07:05.267326  188079 node_conditions.go:123] node cpu capacity is 8
	I0821 11:07:05.267336  188079 node_conditions.go:105] duration metric: took 175.47601ms to run NodePressure ...
	I0821 11:07:05.267345  188079 start.go:228] waiting for startup goroutines ...
	I0821 11:07:05.267372  188079 start.go:233] waiting for cluster config update ...
	I0821 11:07:05.267382  188079 start.go:242] writing updated cluster config ...
	I0821 11:07:05.267659  188079 ssh_runner.go:195] Run: rm -f paused
	I0821 11:07:05.353701  188079 start.go:600] kubectl: 1.28.0, cluster: 1.27.4 (minor skew: 1)
	I0821 11:07:05.362183  188079 out.go:177] * Done! kubectl is now configured to use "pause-942142" cluster and "default" namespace by default
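
At this point the pause-942142 context is the kubectl default, so plain kubectl commands target the repaired cluster, e.g. (illustrative):

	kubectl config current-context   # pause-942142
	kubectl get pods -n kube-system
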
	I0821 11:07:02.966259  194386 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-433377 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0821 11:07:02.984944  194386 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0821 11:07:02.988681  194386 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0821 11:07:02.998786  194386 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.1 and runtime crio
	I0821 11:07:02.998839  194386 ssh_runner.go:195] Run: sudo crictl images --output json
	I0821 11:07:03.041559  194386 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.0-rc.1". assuming images are not preloaded.
	I0821 11:07:03.041634  194386 ssh_runner.go:195] Run: which lz4
	I0821 11:07:03.044987  194386 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0821 11:07:03.048002  194386 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0821 11:07:03.048027  194386 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-rc.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457054966 bytes)
	I0821 11:07:04.050104  194386 crio.go:444] Took 1.005158 seconds to copy over tarball
	I0821 11:07:04.050168  194386 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
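
Extracting the tarball under /var populates the container storage directly, so the runtime images become listable without any pulls. A hypothetical spot-check once extraction finishes:

	sudo crictl images | grep kube-apiserver
	# should show registry.k8s.io/kube-apiserver tagged v1.28.0-rc.1
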
	
	* 
	* ==> CRI-O <==
	* Aug 21 11:06:44 pause-942142 crio[3206]: time="2023-08-21 11:06:44.185382128Z" level=info msg="Started container" PID=3714 containerID=54f4e9a4e3c929269d0b605a99f65c21935aacd534be322a7d20762aada66820 description=kube-system/kube-controller-manager-pause-942142/kube-controller-manager id=0d5a96ad-c065-4c9c-81de-b143afc83058 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0b55e173999efde3100ba7dc963e78fedd2d51bc883a80e7a6a6402334a74d92
	Aug 21 11:06:47 pause-942142 crio[3206]: time="2023-08-21 11:06:47.249249915Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Aug 21 11:06:47 pause-942142 crio[3206]: time="2023-08-21 11:06:47.255244744Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 21 11:06:47 pause-942142 crio[3206]: time="2023-08-21 11:06:47.255287363Z" level=info msg="Updated default CNI network name to kindnet"
	Aug 21 11:06:47 pause-942142 crio[3206]: time="2023-08-21 11:06:47.255304809Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Aug 21 11:06:47 pause-942142 crio[3206]: time="2023-08-21 11:06:47.263670352Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 21 11:06:47 pause-942142 crio[3206]: time="2023-08-21 11:06:47.263706205Z" level=info msg="Updated default CNI network name to kindnet"
	Aug 21 11:06:47 pause-942142 crio[3206]: time="2023-08-21 11:06:47.263723075Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Aug 21 11:06:47 pause-942142 crio[3206]: time="2023-08-21 11:06:47.339890699Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 21 11:06:47 pause-942142 crio[3206]: time="2023-08-21 11:06:47.339934926Z" level=info msg="Updated default CNI network name to kindnet"
	Aug 21 11:06:47 pause-942142 crio[3206]: time="2023-08-21 11:06:47.339958858Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Aug 21 11:06:47 pause-942142 crio[3206]: time="2023-08-21 11:06:47.344162198Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Aug 21 11:06:47 pause-942142 crio[3206]: time="2023-08-21 11:06:47.344199730Z" level=info msg="Updated default CNI network name to kindnet"
	Aug 21 11:07:02 pause-942142 crio[3206]: time="2023-08-21 11:07:02.454249526Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.10.1" id=9a346d35-0418-4357-808b-b1fcb56928de name=/runtime.v1.ImageService/ImageStatus
	Aug 21 11:07:02 pause-942142 crio[3206]: time="2023-08-21 11:07:02.454500303Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,RepoTags:[registry.k8s.io/coredns/coredns:v1.10.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378],Size_:53621675,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=9a346d35-0418-4357-808b-b1fcb56928de name=/runtime.v1.ImageService/ImageStatus
	Aug 21 11:07:02 pause-942142 crio[3206]: time="2023-08-21 11:07:02.457971765Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.10.1" id=adeb9401-ff20-4477-9029-21f9e0aefab1 name=/runtime.v1.ImageService/ImageStatus
	Aug 21 11:07:02 pause-942142 crio[3206]: time="2023-08-21 11:07:02.458214516Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,RepoTags:[registry.k8s.io/coredns/coredns:v1.10.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378],Size_:53621675,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=adeb9401-ff20-4477-9029-21f9e0aefab1 name=/runtime.v1.ImageService/ImageStatus
	Aug 21 11:07:02 pause-942142 crio[3206]: time="2023-08-21 11:07:02.459250133Z" level=info msg="Creating container: kube-system/coredns-5d78c9869d-cw6gk/coredns" id=f7affeba-4d5b-4b3a-8426-3b33b9b93088 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 21 11:07:02 pause-942142 crio[3206]: time="2023-08-21 11:07:02.459394283Z" level=warning msg="Allowed annotations are specified for workload []"
	Aug 21 11:07:02 pause-942142 crio[3206]: time="2023-08-21 11:07:02.663435417Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/473b96a41bf819b26ec93f1045995b3a135277bf23b2b8702667bee444ab6716/merged/etc/passwd: no such file or directory"
	Aug 21 11:07:02 pause-942142 crio[3206]: time="2023-08-21 11:07:02.663476167Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/473b96a41bf819b26ec93f1045995b3a135277bf23b2b8702667bee444ab6716/merged/etc/group: no such file or directory"
	Aug 21 11:07:02 pause-942142 crio[3206]: time="2023-08-21 11:07:02.797188589Z" level=info msg="Created container f835818a31b3eac4ddd96ea64c695523f75dd3f85fd579f47b4c2cb5ad6cf7bd: kube-system/coredns-5d78c9869d-cw6gk/coredns" id=f7affeba-4d5b-4b3a-8426-3b33b9b93088 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 21 11:07:02 pause-942142 crio[3206]: time="2023-08-21 11:07:02.797778298Z" level=info msg="Starting container: f835818a31b3eac4ddd96ea64c695523f75dd3f85fd579f47b4c2cb5ad6cf7bd" id=d8e2e341-267e-4f79-b263-eb846cb4fad0 name=/runtime.v1.RuntimeService/StartContainer
	Aug 21 11:07:02 pause-942142 crio[3206]: time="2023-08-21 11:07:02.807525152Z" level=info msg="Started container" PID=4096 containerID=f835818a31b3eac4ddd96ea64c695523f75dd3f85fd579f47b4c2cb5ad6cf7bd description=kube-system/coredns-5d78c9869d-cw6gk/coredns id=d8e2e341-267e-4f79-b263-eb846cb4fad0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=16e97ff277a28122017474cb4e6b067ec9cb646189d86462dd8f408a79f9a86b
	Aug 21 11:07:06 pause-942142 crio[3206]: time="2023-08-21 11:07:06.679099176Z" level=info msg="Stopping container: f835818a31b3eac4ddd96ea64c695523f75dd3f85fd579f47b4c2cb5ad6cf7bd (timeout: 30s)" id=e45c48f0-f790-438d-aab7-a2728c1c1510 name=/runtime.v1.RuntimeService/StopContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f835818a31b3e       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   6 seconds ago       Running             coredns                   2                   16e97ff277a28       coredns-5d78c9869d-cw6gk
	54f4e9a4e3c92       f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5   25 seconds ago      Running             kube-controller-manager   2                   0b55e173999ef       kube-controller-manager-pause-942142
	6b469ef8743fe       e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c   25 seconds ago      Running             kube-apiserver            2                   e51e396e2e7d3       kube-apiserver-pause-942142
	e277e19aebf2c       98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16   25 seconds ago      Running             kube-scheduler            2                   69f36d87548b3       kube-scheduler-pause-942142
	e4f2bb098d137       b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da   25 seconds ago      Running             kindnet-cni               2                   d9fcc0be76b03       kindnet-qrlk5
	6fd177b85deca       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681   25 seconds ago      Running             etcd                      2                   c045432d7748a       etcd-pause-942142
	7b334173daa92       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   25 seconds ago      Running             coredns                   2                   b4afa50bde8f0       coredns-5d78c9869d-2fzvb
	544c01d0a3442       6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4   25 seconds ago      Running             kube-proxy                2                   c2212720a9625       kube-proxy-vbspt
	327d0fb7dbbc5       e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c   37 seconds ago      Exited              kube-apiserver            1                   e51e396e2e7d3       kube-apiserver-pause-942142
	9581f7f8ccbc1       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   37 seconds ago      Exited              coredns                   1                   b4afa50bde8f0       coredns-5d78c9869d-2fzvb
	05325b9d295d4       f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5   37 seconds ago      Exited              kube-controller-manager   1                   0b55e173999ef       kube-controller-manager-pause-942142
	179d2ba79aac8       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   37 seconds ago      Exited              coredns                   1                   16e97ff277a28       coredns-5d78c9869d-cw6gk
	0828dc2ced588       6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4   37 seconds ago      Exited              kube-proxy                1                   c2212720a9625       kube-proxy-vbspt
	fbe17bd915d26       b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da   37 seconds ago      Exited              kindnet-cni               1                   d9fcc0be76b03       kindnet-qrlk5
	31d22f771e570       98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16   37 seconds ago      Exited              kube-scheduler            1                   69f36d87548b3       kube-scheduler-pause-942142
	9e6691b39ff42       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681   37 seconds ago      Exited              etcd                      1                   c045432d7748a       etcd-pause-942142
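
The listing above matches the column layout of crictl ps -a; the same view can be reproduced against a live profile with (illustrative):

	minikube -p pause-942142 ssh -- sudo crictl ps -a
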
	
	* 
	* ==> coredns [179d2ba79aac88d8d5a28cf3b3f3c792df2868d30b9d3e76e99879a3547221e3] <==
	* [INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:51750 - 8698 "HINFO IN 2352085751448041643.1116235000315882265. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.056761918s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> coredns [7b334173daa9205e3def69c0c9aa3fde5b5022a9f338e2895d0fcbf8ae76dc7e] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:42211 - 45657 "HINFO IN 4300767555842199613.3515570745801274251. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.045226569s
	
	* 
	* ==> coredns [9581f7f8ccbc19ef0ec935172c03cd5035fb7a36198d2f218807299959d3a846] <==
	* 
	* 
	* ==> coredns [f835818a31b3eac4ddd96ea64c695523f75dd3f85fd579f47b4c2cb5ad6cf7bd] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:54775 - 1612 "HINFO IN 1639401811397350260.1326845661261012929. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.057596052s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-942142
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-942142
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43
	                    minikube.k8s.io/name=pause-942142
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_21T11_06_06_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 21 Aug 2023 11:06:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-942142
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 21 Aug 2023 11:07:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 21 Aug 2023 11:06:20 +0000   Mon, 21 Aug 2023 11:05:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 21 Aug 2023 11:06:20 +0000   Mon, 21 Aug 2023 11:05:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 21 Aug 2023 11:06:20 +0000   Mon, 21 Aug 2023 11:05:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 21 Aug 2023 11:06:20 +0000   Mon, 21 Aug 2023 11:06:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-942142
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	System Info:
	  Machine ID:                 0fc06879fd5d45638274c1305316b091
	  System UUID:                e347383a-6298-4d71-aa9f-53ae7e22a7ae
	  Boot ID:                    19bba9d5-fb53-4c36-8f17-b39d772f0931
	  Kernel Version:             5.15.0-1039-gcp
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.27.4
	  Kube-Proxy Version:         v1.27.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5d78c9869d-2fzvb                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     51s
	  kube-system                 coredns-5d78c9869d-cw6gk                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     51s
	  kube-system                 etcd-pause-942142                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         64s
	  kube-system                 kindnet-qrlk5                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      51s
	  kube-system                 kube-apiserver-pause-942142             250m (3%)     0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-controller-manager-pause-942142    200m (2%)     0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kube-proxy-vbspt                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 kube-scheduler-pause-942142             100m (1%)     0 (0%)      0 (0%)           0 (0%)         64s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 49s   kube-proxy       
	  Normal  Starting                 21s   kube-proxy       
	  Normal  Starting                 64s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  64s   kubelet          Node pause-942142 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    64s   kubelet          Node pause-942142 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     64s   kubelet          Node pause-942142 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           52s   node-controller  Node pause-942142 event: Registered Node pause-942142 in Controller
	  Normal  NodeReady                49s   kubelet          Node pause-942142 status is now: NodeReady
	  Normal  RegisteredNode           11s   node-controller  Node pause-942142 event: Registered Node pause-942142 in Controller
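
The node description above is captured by the post-mortem; against a live cluster the same view is available with (illustrative):

	kubectl --context pause-942142 describe node pause-942142
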
	
	* 
	* ==> dmesg <==
	* [  +4.191597] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ede9cfb77cb9
	[  +0.000006] ll header: 00000000: 02 42 0a 3e a8 14 02 42 c0 a8 3a 02 08 00
	[  +8.191252] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-ede9cfb77cb9
	[  +0.000008] ll header: 00000000: 02 42 0a 3e a8 14 02 42 c0 a8 3a 02 08 00
	[Aug21 10:58] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-ede9cfb77cb9
	[  +0.000007] ll header: 00000000: 02 42 0a 3e a8 14 02 42 c0 a8 3a 02 08 00
	[  +1.016767] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-ede9cfb77cb9
	[  +0.000008] ll header: 00000000: 02 42 0a 3e a8 14 02 42 c0 a8 3a 02 08 00
	[  +2.015803] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-ede9cfb77cb9
	[  +0.000006] ll header: 00000000: 02 42 0a 3e a8 14 02 42 c0 a8 3a 02 08 00
	[  +4.031606] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-ede9cfb77cb9
	[  +0.000007] ll header: 00000000: 02 42 0a 3e a8 14 02 42 c0 a8 3a 02 08 00
	[  +8.191228] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-ede9cfb77cb9
	[  +0.000007] ll header: 00000000: 02 42 0a 3e a8 14 02 42 c0 a8 3a 02 08 00
	[Aug21 11:01] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-db3bed39ab33
	[  +0.000006] ll header: 00000000: 02 42 37 03 4b 8d 02 42 c0 a8 43 02 08 00
	[  +1.002962] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-db3bed39ab33
	[  +0.000006] ll header: 00000000: 02 42 37 03 4b 8d 02 42 c0 a8 43 02 08 00
	[  +2.015772] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-db3bed39ab33
	[  +0.000022] ll header: 00000000: 02 42 37 03 4b 8d 02 42 c0 a8 43 02 08 00
	[  +4.063611] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-db3bed39ab33
	[  +0.000006] ll header: 00000000: 02 42 37 03 4b 8d 02 42 c0 a8 43 02 08 00
	[  +8.191184] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-db3bed39ab33
	[  +0.000007] ll header: 00000000: 02 42 37 03 4b 8d 02 42 c0 a8 43 02 08 00
	[Aug21 11:04] process 'docker/tmp/qemu-check476447643/check' started with executable stack
	
	* 
	* ==> etcd [6fd177b85deca00ecd00526f81b113b77eab41cd0c23ca9ae8fc061ba5c61c5d] <==
	* {"level":"info","ts":"2023-08-21T11:06:44.242Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-08-21T11:06:44.248Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-08-21T11:06:45.881Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2023-08-21T11:06:45.881Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-08-21T11:06:45.881Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2023-08-21T11:06:45.881Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2023-08-21T11:06:45.881Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-08-21T11:06:45.881Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2023-08-21T11:06:45.881Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-08-21T11:06:45.882Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-942142 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-08-21T11:06:45.882Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-21T11:06:45.882Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-21T11:06:45.882Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-21T11:06:45.882Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-08-21T11:06:45.883Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-08-21T11:06:45.883Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2023-08-21T11:06:53.725Z","caller":"traceutil/trace.go:171","msg":"trace[1699630513] transaction","detail":"{read_only:false; response_revision:417; number_of_response:1; }","duration":"118.671593ms","start":"2023-08-21T11:06:53.607Z","end":"2023-08-21T11:06:53.725Z","steps":["trace[1699630513] 'process raft request'  (duration: 118.522354ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-21T11:06:53.860Z","caller":"traceutil/trace.go:171","msg":"trace[2036191968] transaction","detail":"{read_only:false; response_revision:418; number_of_response:1; }","duration":"129.636554ms","start":"2023-08-21T11:06:53.730Z","end":"2023-08-21T11:06:53.860Z","steps":["trace[2036191968] 'process raft request'  (duration: 129.499422ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-21T11:06:54.060Z","caller":"traceutil/trace.go:171","msg":"trace[1078095373] transaction","detail":"{read_only:false; response_revision:419; number_of_response:1; }","duration":"195.870037ms","start":"2023-08-21T11:06:53.864Z","end":"2023-08-21T11:06:54.060Z","steps":["trace[1078095373] 'process raft request'  (duration: 119.434991ms)","trace[1078095373] 'compare'  (duration: 76.348735ms)"],"step_count":2}
	{"level":"info","ts":"2023-08-21T11:06:54.236Z","caller":"traceutil/trace.go:171","msg":"trace[615549974] transaction","detail":"{read_only:false; response_revision:421; number_of_response:1; }","duration":"110.442012ms","start":"2023-08-21T11:06:54.125Z","end":"2023-08-21T11:06:54.236Z","steps":["trace[615549974] 'process raft request'  (duration: 56.82652ms)","trace[615549974] 'compare'  (duration: 53.517724ms)"],"step_count":2}
	{"level":"warn","ts":"2023-08-21T11:06:54.515Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"177.918396ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kube-system/kube-apiserver-pause-942142.177d614550e7993d\" ","response":"range_response_count:1 size:781"}
	{"level":"info","ts":"2023-08-21T11:06:54.515Z","caller":"traceutil/trace.go:171","msg":"trace[2002545719] range","detail":"{range_begin:/registry/events/kube-system/kube-apiserver-pause-942142.177d614550e7993d; range_end:; response_count:1; response_revision:422; }","duration":"178.0095ms","start":"2023-08-21T11:06:54.337Z","end":"2023-08-21T11:06:54.515Z","steps":["trace[2002545719] 'range keys from in-memory index tree'  (duration: 177.808813ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-21T11:06:54.856Z","caller":"traceutil/trace.go:171","msg":"trace[1201774658] transaction","detail":"{read_only:false; response_revision:427; number_of_response:1; }","duration":"119.751483ms","start":"2023-08-21T11:06:54.736Z","end":"2023-08-21T11:06:54.856Z","steps":["trace[1201774658] 'process raft request'  (duration: 60.521186ms)","trace[1201774658] 'compare'  (duration: 59.118239ms)"],"step_count":2}
	{"level":"info","ts":"2023-08-21T11:06:55.067Z","caller":"traceutil/trace.go:171","msg":"trace[491422818] transaction","detail":"{read_only:false; response_revision:432; number_of_response:1; }","duration":"124.300399ms","start":"2023-08-21T11:06:54.943Z","end":"2023-08-21T11:06:55.067Z","steps":["trace[491422818] 'process raft request'  (duration: 65.843692ms)","trace[491422818] 'compare'  (duration: 58.329852ms)"],"step_count":2}
	{"level":"info","ts":"2023-08-21T11:06:55.525Z","caller":"traceutil/trace.go:171","msg":"trace[738112209] transaction","detail":"{read_only:false; response_revision:439; number_of_response:1; }","duration":"100.408215ms","start":"2023-08-21T11:06:55.425Z","end":"2023-08-21T11:06:55.525Z","steps":["trace[738112209] 'process raft request'  (duration: 100.28954ms)"],"step_count":1}
	
	* 
	* ==> etcd [9e6691b39ff4244c52bee2ef0e9a98f32e3aee856e59421957e8d0ce5ad633a5] <==
	* {"level":"info","ts":"2023-08-21T11:06:32.144Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"804.306µs"}
	{"level":"info","ts":"2023-08-21T11:06:32.147Z","caller":"etcdserver/server.go:530","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2023-08-21T11:06:32.151Z","caller":"etcdserver/raft.go:529","msg":"restarting local member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","commit-index":406}
	{"level":"info","ts":"2023-08-21T11:06:32.151Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=()"}
	{"level":"info","ts":"2023-08-21T11:06:32.152Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became follower at term 2"}
	{"level":"info","ts":"2023-08-21T11:06:32.152Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft ea7e25599daad906 [peers: [], term: 2, commit: 406, applied: 0, lastindex: 406, lastterm: 2]"}
	{"level":"warn","ts":"2023-08-21T11:06:32.156Z","caller":"auth/store.go:1234","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2023-08-21T11:06:32.159Z","caller":"mvcc/kvstore.go:393","msg":"kvstore restored","current-rev":391}
	{"level":"info","ts":"2023-08-21T11:06:32.162Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2023-08-21T11:06:32.169Z","caller":"etcdserver/corrupt.go:95","msg":"starting initial corruption check","local-member-id":"ea7e25599daad906","timeout":"7s"}
	{"level":"info","ts":"2023-08-21T11:06:32.238Z","caller":"etcdserver/corrupt.go:165","msg":"initial corruption checking passed; no corruption","local-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2023-08-21T11:06:32.238Z","caller":"etcdserver/server.go:854","msg":"starting etcd server","local-member-id":"ea7e25599daad906","local-server-version":"3.5.7","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2023-08-21T11:06:32.244Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-08-21T11:06:32.244Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-08-21T11:06:32.244Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-08-21T11:06:32.245Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-08-21T11:06:32.246Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2023-08-21T11:06:32.246Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2023-08-21T11:06:32.246Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-08-21T11:06:32.247Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-08-21T11:06:32.247Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-08-21T11:06:32.246Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-21T11:06:32.247Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-21T11:06:32.246Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-08-21T11:06:32.248Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.76.2:2380"}
	
	* 
	* ==> kernel <==
	*  11:07:09 up 49 min,  0 users,  load average: 4.51, 3.27, 1.78
	Linux pause-942142 5.15.0-1039-gcp #47~20.04.1-Ubuntu SMP Thu Jul 27 22:40:03 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [e4f2bb098d13767e250b277c867cad4b943daf14274a6d06432efeadfcf1a383] <==
	* I0821 11:06:44.040329       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0821 11:06:44.040393       1 main.go:107] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0821 11:06:44.040594       1 main.go:116] setting mtu 1500 for CNI 
	I0821 11:06:44.040617       1 main.go:146] kindnetd IP family: "ipv4"
	I0821 11:06:44.040644       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0821 11:06:44.344031       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0821 11:06:44.344407       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0821 11:06:47.248934       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0821 11:06:47.248962       1 main.go:227] handling current node
	I0821 11:06:57.262464       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0821 11:06:57.262486       1 main.go:227] handling current node
	I0821 11:07:07.275301       1 main.go:223] Handling node with IPs: map[192.168.76.2:{}]
	I0821 11:07:07.275333       1 main.go:227] handling current node
	
	* 
	* ==> kindnet [fbe17bd915d26206d4070a22a06d468687f9e9ac3c757f3682e5252af105dba4] <==
	* I0821 11:06:32.160108       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0821 11:06:32.160994       1 main.go:107] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0821 11:06:32.161305       1 main.go:116] setting mtu 1500 for CNI 
	I0821 11:06:32.161380       1 main.go:146] kindnetd IP family: "ipv4"
	I0821 11:06:32.161463       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	
	* 
	* ==> kube-apiserver [327d0fb7dbbc50c0b1c66ffd711df33c2e0471a307c78992fe70be174ef461bd] <==
	* 
	* 
	* ==> kube-apiserver [6b469ef8743fed7a9528c3aeaf124bb3b503d6bd4d5d0d6cc1d0a5bd34f9cce3] <==
	* I0821 11:06:47.034084       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0821 11:06:47.034147       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0821 11:06:47.034186       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0821 11:06:47.033959       1 controller.go:85] Starting OpenAPI V3 controller
	I0821 11:06:47.034188       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0821 11:06:47.033973       1 naming_controller.go:291] Starting NamingConditionController
	E0821 11:06:47.163601       1 controller.go:155] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0821 11:06:47.165750       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0821 11:06:47.235992       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0821 11:06:47.236140       1 aggregator.go:152] initial CRD sync complete...
	I0821 11:06:47.236175       1 autoregister_controller.go:141] Starting autoregister controller
	I0821 11:06:47.236226       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0821 11:06:47.236258       1 cache.go:39] Caches are synced for autoregister controller
	I0821 11:06:47.236764       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0821 11:06:47.237282       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0821 11:06:47.237309       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0821 11:06:47.238874       1 shared_informer.go:318] Caches are synced for configmaps
	I0821 11:06:47.241445       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0821 11:06:47.242740       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0821 11:06:47.244373       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0821 11:06:47.806268       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0821 11:06:48.034313       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0821 11:06:58.452021       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0821 11:06:58.482655       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0821 11:06:58.497981       1 controller.go:624] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-controller-manager [05325b9d295d4de54fc9eee30cdda179eed8761e957e48fc795f9c62de082588] <==
	* 
	* 
	* ==> kube-controller-manager [54f4e9a4e3c929269d0b605a99f65c21935aacd534be322a7d20762aada66820] <==
	* I0821 11:06:58.448266       1 shared_informer.go:318] Caches are synced for job
	I0821 11:06:58.448789       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0821 11:06:58.452168       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0821 11:06:58.453340       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0821 11:06:58.454504       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0821 11:06:58.456165       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0821 11:06:58.466108       1 shared_informer.go:318] Caches are synced for namespace
	I0821 11:06:58.469424       1 shared_informer.go:318] Caches are synced for deployment
	I0821 11:06:58.471701       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0821 11:06:58.475194       1 shared_informer.go:318] Caches are synced for daemon sets
	I0821 11:06:58.478768       1 shared_informer.go:318] Caches are synced for expand
	I0821 11:06:58.478850       1 shared_informer.go:318] Caches are synced for PV protection
	I0821 11:06:58.480968       1 shared_informer.go:318] Caches are synced for PVC protection
	I0821 11:06:58.487541       1 shared_informer.go:318] Caches are synced for GC
	I0821 11:06:58.487576       1 shared_informer.go:318] Caches are synced for crt configmap
	I0821 11:06:58.487549       1 shared_informer.go:318] Caches are synced for endpoint
	I0821 11:06:58.488004       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5d78c9869d to 1 from 2"
	I0821 11:06:58.489970       1 shared_informer.go:318] Caches are synced for disruption
	I0821 11:06:58.543938       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5d78c9869d-cw6gk"
	I0821 11:06:58.572694       1 shared_informer.go:318] Caches are synced for resource quota
	I0821 11:06:58.581742       1 shared_informer.go:318] Caches are synced for resource quota
	I0821 11:06:58.608631       1 shared_informer.go:318] Caches are synced for attach detach
	I0821 11:06:59.045506       1 shared_informer.go:318] Caches are synced for garbage collector
	I0821 11:06:59.045522       1 shared_informer.go:318] Caches are synced for garbage collector
	I0821 11:06:59.045561       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-proxy [0828dc2ced588a98499aeca666b45dadc7a01c60197d187af4790a70e968ac60] <==
	* 
	* 
	* ==> kube-proxy [544c01d0a34426cdfe78783ac00f0b6d4d22107641e52eee037f178973c5645a] <==
	* E0821 11:06:43.882378       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-942142": dial tcp 192.168.76.2:8443: connect: connection refused
	I0821 11:06:47.264415       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I0821 11:06:47.264542       1 server_others.go:110] "Detected node IP" address="192.168.76.2"
	I0821 11:06:47.267497       1 server_others.go:554] "Using iptables proxy"
	I0821 11:06:47.361529       1 server_others.go:192] "Using iptables Proxier"
	I0821 11:06:47.361576       1 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0821 11:06:47.361586       1 server_others.go:200] "Creating dualStackProxier for iptables"
	I0821 11:06:47.361602       1 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0821 11:06:47.361638       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0821 11:06:47.362392       1 server.go:658] "Version info" version="v1.27.4"
	I0821 11:06:47.362456       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0821 11:06:47.363091       1 config.go:315] "Starting node config controller"
	I0821 11:06:47.363094       1 config.go:188] "Starting service config controller"
	I0821 11:06:47.363119       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0821 11:06:47.363124       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0821 11:06:47.363286       1 config.go:97] "Starting endpoint slice config controller"
	I0821 11:06:47.363326       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0821 11:06:47.463272       1 shared_informer.go:318] Caches are synced for service config
	I0821 11:06:47.463342       1 shared_informer.go:318] Caches are synced for node config
	I0821 11:06:47.464955       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [31d22f771e57016d722ad1604c45f8d36af9b4ef37787395b579eff24cc280d8] <==
	* 
	* 
	* ==> kube-scheduler [e277e19aebf2cdda3d12e800f195c0ee4097fb33065a1da2b37b3671a74a7277] <==
	* I0821 11:06:44.885439       1 serving.go:348] Generated self-signed cert in-memory
	W0821 11:06:47.073615       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0821 11:06:47.073732       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0821 11:06:47.073771       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0821 11:06:47.073812       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0821 11:06:47.163992       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.4"
	I0821 11:06:47.164108       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0821 11:06:47.166733       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0821 11:06:47.235657       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0821 11:06:47.235757       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0821 11:06:47.235777       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0821 11:06:47.336631       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Aug 21 11:06:43 pause-942142 kubelet[1567]: I0821 11:06:43.962626    1567 scope.go:115] "RemoveContainer" containerID="d6e073418281aeac0de2fe83f3ec9c4628e6b21045b247d27d79e3932634a8c4"
	Aug 21 11:06:44 pause-942142 kubelet[1567]: I0821 11:06:44.053202    1567 scope.go:115] "RemoveContainer" containerID="e54946981e64d13a71635a59d35d5b82c350dcd4844893d298edf4d888a69179"
	Aug 21 11:06:44 pause-942142 kubelet[1567]: I0821 11:06:44.136859    1567 scope.go:115] "RemoveContainer" containerID="ef09b7eb4707d223ba1f602fb7fab739839118125d9ce921bed2bce1b9aa3b70"
	Aug 21 11:06:44 pause-942142 kubelet[1567]: I0821 11:06:44.582710    1567 scope.go:115] "RemoveContainer" containerID="179d2ba79aac88d8d5a28cf3b3f3c792df2868d30b9d3e76e99879a3547221e3"
	Aug 21 11:06:44 pause-942142 kubelet[1567]: E0821 11:06:44.583190    1567 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-5d78c9869d-cw6gk_kube-system(f27ce3bb-203e-4d6b-988a-bdb76928ec2f)\"" pod="kube-system/coredns-5d78c9869d-cw6gk" podUID=f27ce3bb-203e-4d6b-988a-bdb76928ec2f
	Aug 21 11:06:45 pause-942142 kubelet[1567]: W0821 11:06:45.607626    1567 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Aug 21 11:06:45 pause-942142 kubelet[1567]: W0821 11:06:45.613742    1567 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Aug 21 11:06:50 pause-942142 kubelet[1567]: I0821 11:06:50.434588    1567 scope.go:115] "RemoveContainer" containerID="179d2ba79aac88d8d5a28cf3b3f3c792df2868d30b9d3e76e99879a3547221e3"
	Aug 21 11:06:50 pause-942142 kubelet[1567]: E0821 11:06:50.434968    1567 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-5d78c9869d-cw6gk_kube-system(f27ce3bb-203e-4d6b-988a-bdb76928ec2f)\"" pod="kube-system/coredns-5d78c9869d-cw6gk" podUID=f27ce3bb-203e-4d6b-988a-bdb76928ec2f
	Aug 21 11:06:50 pause-942142 kubelet[1567]: I0821 11:06:50.601092    1567 scope.go:115] "RemoveContainer" containerID="179d2ba79aac88d8d5a28cf3b3f3c792df2868d30b9d3e76e99879a3547221e3"
	Aug 21 11:06:50 pause-942142 kubelet[1567]: E0821 11:06:50.601467    1567 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-5d78c9869d-cw6gk_kube-system(f27ce3bb-203e-4d6b-988a-bdb76928ec2f)\"" pod="kube-system/coredns-5d78c9869d-cw6gk" podUID=f27ce3bb-203e-4d6b-988a-bdb76928ec2f
	Aug 21 11:06:55 pause-942142 kubelet[1567]: W0821 11:06:55.630164    1567 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Aug 21 11:07:02 pause-942142 kubelet[1567]: I0821 11:07:02.453617    1567 scope.go:115] "RemoveContainer" containerID="179d2ba79aac88d8d5a28cf3b3f3c792df2868d30b9d3e76e99879a3547221e3"
	Aug 21 11:07:05 pause-942142 kubelet[1567]: E0821 11:07:05.365896    1567 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/e8d86164e46c218910618cfb6079ce6f6df4767a1a99ce732155f63740fe4e59/diff" to get inode usage: stat /var/lib/containers/storage/overlay/e8d86164e46c218910618cfb6079ce6f6df4767a1a99ce732155f63740fe4e59/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-scheduler-pause-942142_977f703dfad04e101172ea1685382344/kube-scheduler/0.log" to get inode usage: stat /var/log/pods/kube-system_kube-scheduler-pause-942142_977f703dfad04e101172ea1685382344/kube-scheduler/0.log: no such file or directory
	Aug 21 11:07:05 pause-942142 kubelet[1567]: E0821 11:07:05.373083    1567 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/e53dc8cc5c0793358e3b8054c350cc10e12f83e5db8f27da73ef23a864323520/diff" to get inode usage: stat /var/lib/containers/storage/overlay/e53dc8cc5c0793358e3b8054c350cc10e12f83e5db8f27da73ef23a864323520/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-controller-manager-pause-942142_fd1c8725478202c37350e553fb750bf5/kube-controller-manager/0.log" to get inode usage: stat /var/log/pods/kube-system_kube-controller-manager-pause-942142_fd1c8725478202c37350e553fb750bf5/kube-controller-manager/0.log: no such file or directory
	Aug 21 11:07:05 pause-942142 kubelet[1567]: E0821 11:07:05.437803    1567 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/448c888fcf25878990a92e930d693a9d7bdf214d6e806d52c1f7b9c9deccd474/diff" to get inode usage: stat /var/lib/containers/storage/overlay/448c888fcf25878990a92e930d693a9d7bdf214d6e806d52c1f7b9c9deccd474/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-apiserver-pause-942142_2ef6a030af3a3f686c6cbfa234b153b4/kube-apiserver/0.log" to get inode usage: stat /var/log/pods/kube-system_kube-apiserver-pause-942142_2ef6a030af3a3f686c6cbfa234b153b4/kube-apiserver/0.log: no such file or directory
	Aug 21 11:07:05 pause-942142 kubelet[1567]: E0821 11:07:05.457871    1567 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/38051ae2d5fba9c92400d0676ae4d9713940756f324eff1dc08975b5d9a32565/diff" to get inode usage: stat /var/lib/containers/storage/overlay/38051ae2d5fba9c92400d0676ae4d9713940756f324eff1dc08975b5d9a32565/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_etcd-pause-942142_c8274e3bf6266e5d0e1b49746e7f9a41/etcd/0.log" to get inode usage: stat /var/log/pods/kube-system_etcd-pause-942142_c8274e3bf6266e5d0e1b49746e7f9a41/etcd/0.log: no such file or directory
	Aug 21 11:07:05 pause-942142 kubelet[1567]: E0821 11:07:05.569905    1567 manager.go:1106] Failed to create existing container: /crio-b4afa50bde8f0e095b733d89c3550dc0990e2efeceaf419a0833e989ed50ba1b: Error finding container b4afa50bde8f0e095b733d89c3550dc0990e2efeceaf419a0833e989ed50ba1b: Status 404 returned error can't find the container with id b4afa50bde8f0e095b733d89c3550dc0990e2efeceaf419a0833e989ed50ba1b
	Aug 21 11:07:05 pause-942142 kubelet[1567]: E0821 11:07:05.583681    1567 manager.go:1106] Failed to create existing container: /crio-d9fcc0be76b0302ec0df69f463682ba3e7b96b716d8725ffe49b8b9987f906c6: Error finding container d9fcc0be76b0302ec0df69f463682ba3e7b96b716d8725ffe49b8b9987f906c6: Status 404 returned error can't find the container with id d9fcc0be76b0302ec0df69f463682ba3e7b96b716d8725ffe49b8b9987f906c6
	Aug 21 11:07:05 pause-942142 kubelet[1567]: E0821 11:07:05.587876    1567 manager.go:1106] Failed to create existing container: /docker/309473feabb5ddf8e1b31a85fe1fa26965aae266f6a8b3dfd5daa9ce19790bd4/crio-b4afa50bde8f0e095b733d89c3550dc0990e2efeceaf419a0833e989ed50ba1b: Error finding container b4afa50bde8f0e095b733d89c3550dc0990e2efeceaf419a0833e989ed50ba1b: Status 404 returned error can't find the container with id b4afa50bde8f0e095b733d89c3550dc0990e2efeceaf419a0833e989ed50ba1b
	Aug 21 11:07:05 pause-942142 kubelet[1567]: E0821 11:07:05.588614    1567 manager.go:1106] Failed to create existing container: /docker/309473feabb5ddf8e1b31a85fe1fa26965aae266f6a8b3dfd5daa9ce19790bd4/crio-c2212720a9625c1d07398062176961401d967df6b8e4dc2fbb6de8553c0ae60e: Error finding container c2212720a9625c1d07398062176961401d967df6b8e4dc2fbb6de8553c0ae60e: Status 404 returned error can't find the container with id c2212720a9625c1d07398062176961401d967df6b8e4dc2fbb6de8553c0ae60e
	Aug 21 11:07:05 pause-942142 kubelet[1567]: E0821 11:07:05.592309    1567 manager.go:1106] Failed to create existing container: /docker/309473feabb5ddf8e1b31a85fe1fa26965aae266f6a8b3dfd5daa9ce19790bd4/crio-16e97ff277a28122017474cb4e6b067ec9cb646189d86462dd8f408a79f9a86b: Error finding container 16e97ff277a28122017474cb4e6b067ec9cb646189d86462dd8f408a79f9a86b: Status 404 returned error can't find the container with id 16e97ff277a28122017474cb4e6b067ec9cb646189d86462dd8f408a79f9a86b
	Aug 21 11:07:05 pause-942142 kubelet[1567]: E0821 11:07:05.593070    1567 manager.go:1106] Failed to create existing container: /crio-16e97ff277a28122017474cb4e6b067ec9cb646189d86462dd8f408a79f9a86b: Error finding container 16e97ff277a28122017474cb4e6b067ec9cb646189d86462dd8f408a79f9a86b: Status 404 returned error can't find the container with id 16e97ff277a28122017474cb4e6b067ec9cb646189d86462dd8f408a79f9a86b
	Aug 21 11:07:05 pause-942142 kubelet[1567]: E0821 11:07:05.594589    1567 manager.go:1106] Failed to create existing container: /docker/309473feabb5ddf8e1b31a85fe1fa26965aae266f6a8b3dfd5daa9ce19790bd4/crio-d9fcc0be76b0302ec0df69f463682ba3e7b96b716d8725ffe49b8b9987f906c6: Error finding container d9fcc0be76b0302ec0df69f463682ba3e7b96b716d8725ffe49b8b9987f906c6: Status 404 returned error can't find the container with id d9fcc0be76b0302ec0df69f463682ba3e7b96b716d8725ffe49b8b9987f906c6
	Aug 21 11:07:05 pause-942142 kubelet[1567]: E0821 11:07:05.596148    1567 manager.go:1106] Failed to create existing container: /crio-c2212720a9625c1d07398062176961401d967df6b8e4dc2fbb6de8553c0ae60e: Error finding container c2212720a9625c1d07398062176961401d967df6b8e4dc2fbb6de8553c0ae60e: Status 404 returned error can't find the container with id c2212720a9625c1d07398062176961401d967df6b8e4dc2fbb6de8553c0ae60e
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-942142 -n pause-942142
helpers_test.go:261: (dbg) Run:  kubectl --context pause-942142 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (46.29s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (75.32s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /tmp/minikube-v1.9.0.1758508462.exe start -p stopped-upgrade-212049 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:195: (dbg) Done: /tmp/minikube-v1.9.0.1758508462.exe start -p stopped-upgrade-212049 --memory=2200 --vm-driver=docker  --container-runtime=crio: (59.023230026s)
version_upgrade_test.go:204: (dbg) Run:  /tmp/minikube-v1.9.0.1758508462.exe -p stopped-upgrade-212049 stop
version_upgrade_test.go:204: (dbg) Done: /tmp/minikube-v1.9.0.1758508462.exe -p stopped-upgrade-212049 stop: (10.850203579s)
version_upgrade_test.go:210: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-212049 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:210: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-212049 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (5.438323945s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-212049] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17102
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17102-5717/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17102-5717/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.4
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-212049 in cluster stopped-upgrade-212049
	* Pulling base image ...
	* Restarting existing docker container for "stopped-upgrade-212049" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0821 11:07:35.110628  203161 out.go:296] Setting OutFile to fd 1 ...
	I0821 11:07:35.110726  203161 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 11:07:35.110733  203161 out.go:309] Setting ErrFile to fd 2...
	I0821 11:07:35.110738  203161 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 11:07:35.110951  203161 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17102-5717/.minikube/bin
	I0821 11:07:35.111528  203161 out.go:303] Setting JSON to false
	I0821 11:07:35.113188  203161 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3005,"bootTime":1692613050,"procs":1008,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0821 11:07:35.113255  203161 start.go:138] virtualization: kvm guest
	I0821 11:07:35.115506  203161 out.go:177] * [stopped-upgrade-212049] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0821 11:07:35.117188  203161 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 11:07:35.118422  203161 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 11:07:35.117193  203161 notify.go:220] Checking for updates...
	I0821 11:07:35.120966  203161 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17102-5717/kubeconfig
	I0821 11:07:35.122568  203161 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17102-5717/.minikube
	I0821 11:07:35.123964  203161 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0821 11:07:35.125494  203161 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 11:07:35.128391  203161 config.go:182] Loaded profile config "stopped-upgrade-212049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0821 11:07:35.128425  203161 start_flags.go:695] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0821 11:07:35.130376  203161 out.go:177] * Kubernetes 1.27.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.4
	I0821 11:07:35.131633  203161 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 11:07:35.158076  203161 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0821 11:07:35.158171  203161 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0821 11:07:35.225753  203161 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:66 SystemTime:2023-08-21 11:07:35.20595297 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0821 11:07:35.225880  203161 docker.go:294] overlay module found
	I0821 11:07:35.227844  203161 out.go:177] * Using the docker driver based on existing profile
	I0821 11:07:35.229080  203161 start.go:298] selected driver: docker
	I0821 11:07:35.229090  203161 start.go:902] validating driver "docker" against &{Name:stopped-upgrade-212049 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-212049 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 11:07:35.229170  203161 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 11:07:35.229924  203161 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0821 11:07:35.296766  203161 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:66 SystemTime:2023-08-21 11:07:35.288467333 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0821 11:07:35.297061  203161 cni.go:84] Creating CNI manager for ""
	I0821 11:07:35.297078  203161 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I0821 11:07:35.297085  203161 start_flags.go:319] config:
	{Name:stopped-upgrade-212049 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-212049 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 11:07:35.299018  203161 out.go:177] * Starting control plane node stopped-upgrade-212049 in cluster stopped-upgrade-212049
	I0821 11:07:35.300367  203161 cache.go:122] Beginning downloading kic base image for docker with crio
	I0821 11:07:35.301842  203161 out.go:177] * Pulling base image ...
	I0821 11:07:35.303326  203161 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I0821 11:07:35.303419  203161 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0821 11:07:35.320053  203161 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0821 11:07:35.320079  203161 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	W0821 11:07:35.337364  203161 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0821 11:07:35.337567  203161 profile.go:148] Saving config to /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/stopped-upgrade-212049/config.json ...
	I0821 11:07:35.337667  203161 cache.go:107] acquiring lock: {Name:mkf46660acdf7ff03e108bf1cf65b1fef438520b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 11:07:35.337717  203161 cache.go:107] acquiring lock: {Name:mkf62c953ab8ad47afd65d04bafeb9b4d807eee7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 11:07:35.337782  203161 cache.go:115] /home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I0821 11:07:35.337684  203161 cache.go:107] acquiring lock: {Name:mk5348f13c23b9533a2e2ad38a7e985b30bc9819 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 11:07:35.337800  203161 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 156.695µs
	I0821 11:07:35.337815  203161 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I0821 11:07:35.337804  203161 cache.go:115] /home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I0821 11:07:35.337834  203161 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 119.595µs
	I0821 11:07:35.337843  203161 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I0821 11:07:35.337818  203161 cache.go:107] acquiring lock: {Name:mk87ad3bdd226d03bb02cbfe19c98cb195db50d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 11:07:35.337867  203161 cache.go:195] Successfully downloaded all kic artifacts
	I0821 11:07:35.337867  203161 cache.go:107] acquiring lock: {Name:mk8fab502b40c040bfbe4c7347a87eb74f2172f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 11:07:35.337848  203161 cache.go:107] acquiring lock: {Name:mkd8ba3f69927e1e8ea102f808e07f6f57464583 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 11:07:35.337875  203161 cache.go:107] acquiring lock: {Name:mk90c4e563f6a7df67dc357b0dbdc42a5d1fe77c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 11:07:35.337905  203161 start.go:365] acquiring machines lock for stopped-upgrade-212049: {Name:mkb896922b597fdb25e6bc84831aab5142bd3475 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 11:07:35.337950  203161 cache.go:115] /home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0821 11:07:35.337826  203161 cache.go:107] acquiring lock: {Name:mke72a81dd41e23a45d9a75f85e8ccd88500d8df Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 11:07:35.337959  203161 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 151.292µs
	I0821 11:07:35.337971  203161 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0821 11:07:35.337970  203161 cache.go:115] /home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I0821 11:07:35.337985  203161 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 127.126µs
	I0821 11:07:35.337994  203161 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I0821 11:07:35.338010  203161 cache.go:115] /home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I0821 11:07:35.338019  203161 cache.go:115] /home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I0821 11:07:35.338019  203161 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 179.214µs
	I0821 11:07:35.338030  203161 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I0821 11:07:35.338026  203161 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 267.758µs
	I0821 11:07:35.338023  203161 cache.go:115] /home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I0821 11:07:35.338037  203161 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I0821 11:07:35.338032  203161 start.go:369] acquired machines lock for "stopped-upgrade-212049" in 112.802µs
	I0821 11:07:35.338044  203161 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 228.69µs
	I0821 11:07:35.338045  203161 cache.go:115] /home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0821 11:07:35.338052  203161 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I0821 11:07:35.338053  203161 start.go:96] Skipping create...Using existing machine configuration
	I0821 11:07:35.338067  203161 fix.go:54] fixHost starting: m01
	I0821 11:07:35.338064  203161 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 381.94µs
	I0821 11:07:35.338078  203161 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17102-5717/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0821 11:07:35.338090  203161 cache.go:87] Successfully saved all images to host disk.
	I0821 11:07:35.338364  203161 cli_runner.go:164] Run: docker container inspect stopped-upgrade-212049 --format={{.State.Status}}
	I0821 11:07:35.354882  203161 fix.go:102] recreateIfNeeded on stopped-upgrade-212049: state=Stopped err=<nil>
	W0821 11:07:35.354913  203161 fix.go:128] unexpected machine state, will restart: <nil>
	I0821 11:07:35.356674  203161 out.go:177] * Restarting existing docker container for "stopped-upgrade-212049" ...
	I0821 11:07:35.358179  203161 cli_runner.go:164] Run: docker start stopped-upgrade-212049
	I0821 11:07:35.618236  203161 cli_runner.go:164] Run: docker container inspect stopped-upgrade-212049 --format={{.State.Status}}
	I0821 11:07:35.635674  203161 kic.go:426] container "stopped-upgrade-212049" state is running.
	I0821 11:07:35.636185  203161 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-212049
	I0821 11:07:35.653952  203161 profile.go:148] Saving config to /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/stopped-upgrade-212049/config.json ...
	I0821 11:07:35.654207  203161 machine.go:88] provisioning docker machine ...
	I0821 11:07:35.654237  203161 ubuntu.go:169] provisioning hostname "stopped-upgrade-212049"
	I0821 11:07:35.654292  203161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-212049
	I0821 11:07:35.671658  203161 main.go:141] libmachine: Using SSH client type: native
	I0821 11:07:35.672178  203161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 127.0.0.1 32989 <nil> <nil>}
	I0821 11:07:35.672195  203161 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-212049 && echo "stopped-upgrade-212049" | sudo tee /etc/hostname
	I0821 11:07:35.672760  203161 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51762->127.0.0.1:32989: read: connection reset by peer
	I0821 11:07:38.791693  203161 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-212049
	
	I0821 11:07:38.791782  203161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-212049
	I0821 11:07:38.809642  203161 main.go:141] libmachine: Using SSH client type: native
	I0821 11:07:38.810078  203161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 127.0.0.1 32989 <nil> <nil>}
	I0821 11:07:38.810097  203161 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-212049' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-212049/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-212049' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0821 11:07:38.915061  203161 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0821 11:07:38.915098  203161 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17102-5717/.minikube CaCertPath:/home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17102-5717/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17102-5717/.minikube}
	I0821 11:07:38.915123  203161 ubuntu.go:177] setting up certificates
	I0821 11:07:38.915133  203161 provision.go:83] configureAuth start
	I0821 11:07:38.915197  203161 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-212049
	I0821 11:07:38.931943  203161 provision.go:138] copyHostCerts
	I0821 11:07:38.932009  203161 exec_runner.go:144] found /home/jenkins/minikube-integration/17102-5717/.minikube/key.pem, removing ...
	I0821 11:07:38.932027  203161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17102-5717/.minikube/key.pem
	I0821 11:07:38.932122  203161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17102-5717/.minikube/key.pem (1675 bytes)
	I0821 11:07:38.932269  203161 exec_runner.go:144] found /home/jenkins/minikube-integration/17102-5717/.minikube/ca.pem, removing ...
	I0821 11:07:38.932281  203161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17102-5717/.minikube/ca.pem
	I0821 11:07:38.932324  203161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17102-5717/.minikube/ca.pem (1078 bytes)
	I0821 11:07:38.932426  203161 exec_runner.go:144] found /home/jenkins/minikube-integration/17102-5717/.minikube/cert.pem, removing ...
	I0821 11:07:38.932438  203161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17102-5717/.minikube/cert.pem
	I0821 11:07:38.932471  203161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17102-5717/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17102-5717/.minikube/cert.pem (1123 bytes)
	I0821 11:07:38.932558  203161 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17102-5717/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-212049 san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-212049]
	I0821 11:07:39.038510  203161 provision.go:172] copyRemoteCerts
	I0821 11:07:39.038568  203161 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0821 11:07:39.038608  203161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-212049
	I0821 11:07:39.056264  203161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32989 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/stopped-upgrade-212049/id_rsa Username:docker}
	I0821 11:07:39.138424  203161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0821 11:07:39.154974  203161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0821 11:07:39.170895  203161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0821 11:07:39.187176  203161 provision.go:86] duration metric: configureAuth took 272.032796ms
	I0821 11:07:39.187199  203161 ubuntu.go:193] setting minikube options for container-runtime
	I0821 11:07:39.187405  203161 config.go:182] Loaded profile config "stopped-upgrade-212049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0821 11:07:39.187508  203161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-212049
	I0821 11:07:39.204308  203161 main.go:141] libmachine: Using SSH client type: native
	I0821 11:07:39.204874  203161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80fd20] 0x812dc0 <nil>  [] 0s} 127.0.0.1 32989 <nil> <nil>}
	I0821 11:07:39.204895  203161 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0821 11:07:39.734875  203161 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0821 11:07:39.734905  203161 machine.go:91] provisioned docker machine in 4.080682015s
	I0821 11:07:39.734917  203161 start.go:300] post-start starting for "stopped-upgrade-212049" (driver="docker")
	I0821 11:07:39.734929  203161 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0821 11:07:39.735003  203161 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0821 11:07:39.735052  203161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-212049
	I0821 11:07:39.752661  203161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32989 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/stopped-upgrade-212049/id_rsa Username:docker}
	I0821 11:07:39.834321  203161 ssh_runner.go:195] Run: cat /etc/os-release
	I0821 11:07:39.837177  203161 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0821 11:07:39.837199  203161 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0821 11:07:39.837207  203161 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0821 11:07:39.837213  203161 info.go:137] Remote host: Ubuntu 19.10
	I0821 11:07:39.837222  203161 filesync.go:126] Scanning /home/jenkins/minikube-integration/17102-5717/.minikube/addons for local assets ...
	I0821 11:07:39.837271  203161 filesync.go:126] Scanning /home/jenkins/minikube-integration/17102-5717/.minikube/files for local assets ...
	I0821 11:07:39.837344  203161 filesync.go:149] local asset: /home/jenkins/minikube-integration/17102-5717/.minikube/files/etc/ssl/certs/124602.pem -> 124602.pem in /etc/ssl/certs
	I0821 11:07:39.837435  203161 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0821 11:07:39.843840  203161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-5717/.minikube/files/etc/ssl/certs/124602.pem --> /etc/ssl/certs/124602.pem (1708 bytes)
	I0821 11:07:39.860714  203161 start.go:303] post-start completed in 125.781142ms
	I0821 11:07:39.860794  203161 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0821 11:07:39.860837  203161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-212049
	I0821 11:07:39.877411  203161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32989 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/stopped-upgrade-212049/id_rsa Username:docker}
	I0821 11:07:39.955759  203161 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0821 11:07:39.959654  203161 fix.go:56] fixHost completed within 4.621586581s
	I0821 11:07:39.959678  203161 start.go:83] releasing machines lock for "stopped-upgrade-212049", held for 4.621630027s
	I0821 11:07:39.959735  203161 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-212049
	I0821 11:07:39.976720  203161 ssh_runner.go:195] Run: cat /version.json
	I0821 11:07:39.976780  203161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-212049
	I0821 11:07:39.976836  203161 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0821 11:07:39.976892  203161 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-212049
	I0821 11:07:39.996351  203161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32989 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/stopped-upgrade-212049/id_rsa Username:docker}
	I0821 11:07:39.997432  203161 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32989 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/stopped-upgrade-212049/id_rsa Username:docker}
	W0821 11:07:40.070433  203161 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0821 11:07:40.070523  203161 ssh_runner.go:195] Run: systemctl --version
	I0821 11:07:40.103539  203161 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0821 11:07:40.156239  203161 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0821 11:07:40.160493  203161 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0821 11:07:40.174510  203161 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0821 11:07:40.174584  203161 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0821 11:07:40.196496  203161 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0821 11:07:40.196519  203161 start.go:466] detecting cgroup driver to use...
	I0821 11:07:40.196551  203161 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0821 11:07:40.196600  203161 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0821 11:07:40.215950  203161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0821 11:07:40.224260  203161 docker.go:196] disabling cri-docker service (if available) ...
	I0821 11:07:40.224311  203161 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0821 11:07:40.232741  203161 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0821 11:07:40.240925  203161 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0821 11:07:40.249563  203161 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0821 11:07:40.249623  203161 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0821 11:07:40.327435  203161 docker.go:212] disabling docker service ...
	I0821 11:07:40.327495  203161 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0821 11:07:40.337178  203161 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0821 11:07:40.346158  203161 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0821 11:07:40.409759  203161 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0821 11:07:40.473583  203161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0821 11:07:40.482478  203161 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0821 11:07:40.494233  203161 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0821 11:07:40.494292  203161 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0821 11:07:40.504039  203161 out.go:177] 
	W0821 11:07:40.505592  203161 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0821 11:07:40.505611  203161 out.go:239] * 
	W0821 11:07:40.506428  203161 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0821 11:07:40.507527  203161 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:212: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-212049 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (75.32s)
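
Note: the stderr above isolates the failure: the provisioner rewrites pause_image in /etc/crio/crio.conf.d/02-crio.conf unconditionally, but the machine being upgraded was built by minikube v1.9.0 on an Ubuntu 19.10 base image (see the "Remote host" line in the log) that predates CRI-O's drop-in config directory, so sed exits with status 2 and start aborts with RUNTIME_ENABLE. A minimal sketch of a more tolerant rewrite, assuming CRI-O on such images still reads the monolithic /etc/crio/crio.conf (the fallback path is illustrative, not minikube's actual fix):

	# Sketch: prefer the drop-in config, fall back to the monolithic file if absent.
	conf=/etc/crio/crio.conf.d/02-crio.conf
	[ -f "$conf" ] || conf=/etc/crio/crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$conf"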

                                                
                                    

Test pass (270/304)

Order   Passed test   Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 7.77
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.06
10 TestDownloadOnly/v1.27.4/json-events 8.34
11 TestDownloadOnly/v1.27.4/preload-exists 0
15 TestDownloadOnly/v1.27.4/LogsDuration 0.06
17 TestDownloadOnly/v1.28.0-rc.1/json-events 9.72
18 TestDownloadOnly/v1.28.0-rc.1/preload-exists 0
22 TestDownloadOnly/v1.28.0-rc.1/LogsDuration 0.06
23 TestDownloadOnly/DeleteAll 0.19
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.12
25 TestDownloadOnlyKic 1.15
26 TestBinaryMirror 0.68
27 TestOffline 80.63
29 TestAddons/Setup 124.54
31 TestAddons/parallel/Registry 13.87
33 TestAddons/parallel/InspektorGadget 10.81
34 TestAddons/parallel/MetricsServer 6.11
35 TestAddons/parallel/HelmTiller 9.77
37 TestAddons/parallel/CSI 77.71
38 TestAddons/parallel/Headlamp 13.03
39 TestAddons/parallel/CloudSpanner 5.85
42 TestAddons/serial/GCPAuth/Namespaces 0.11
43 TestAddons/StoppedEnableDisable 12.13
44 TestCertOptions 29.43
45 TestCertExpiration 234.91
47 TestForceSystemdFlag 33.86
48 TestForceSystemdEnv 29.39
50 TestKVMDriverInstallOrUpdate 3.3
54 TestErrorSpam/setup 23.07
55 TestErrorSpam/start 0.57
56 TestErrorSpam/status 0.82
57 TestErrorSpam/pause 1.44
58 TestErrorSpam/unpause 1.43
59 TestErrorSpam/stop 1.35
62 TestFunctional/serial/CopySyncFile 0
63 TestFunctional/serial/StartWithProxy 66.68
64 TestFunctional/serial/AuditLog 0
65 TestFunctional/serial/SoftStart 44.26
66 TestFunctional/serial/KubeContext 0.04
67 TestFunctional/serial/KubectlGetPods 0.07
70 TestFunctional/serial/CacheCmd/cache/add_remote 2.64
71 TestFunctional/serial/CacheCmd/cache/add_local 1.22
72 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
73 TestFunctional/serial/CacheCmd/cache/list 0.04
74 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.26
75 TestFunctional/serial/CacheCmd/cache/cache_reload 1.56
76 TestFunctional/serial/CacheCmd/cache/delete 0.08
77 TestFunctional/serial/MinikubeKubectlCmd 0.1
78 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.09
79 TestFunctional/serial/ExtraConfig 27.73
80 TestFunctional/serial/ComponentHealth 0.07
81 TestFunctional/serial/LogsCmd 1.29
82 TestFunctional/serial/LogsFileCmd 1.3
83 TestFunctional/serial/InvalidService 4.63
85 TestFunctional/parallel/ConfigCmd 0.3
86 TestFunctional/parallel/DashboardCmd 21.65
87 TestFunctional/parallel/DryRun 0.53
88 TestFunctional/parallel/InternationalLanguage 0.17
89 TestFunctional/parallel/StatusCmd 1.95
93 TestFunctional/parallel/ServiceCmdConnect 11.74
94 TestFunctional/parallel/AddonsCmd 0.13
95 TestFunctional/parallel/PersistentVolumeClaim 27.35
97 TestFunctional/parallel/SSHCmd 0.47
98 TestFunctional/parallel/CpCmd 1.15
99 TestFunctional/parallel/MySQL 23.76
100 TestFunctional/parallel/FileSync 0.35
101 TestFunctional/parallel/CertSync 1.83
105 TestFunctional/parallel/NodeLabels 0.07
107 TestFunctional/parallel/NonActiveRuntimeDisabled 0.59
109 TestFunctional/parallel/License 0.16
110 TestFunctional/parallel/Version/short 0.05
111 TestFunctional/parallel/Version/components 0.47
112 TestFunctional/parallel/ImageCommands/ImageListShort 0.2
113 TestFunctional/parallel/ImageCommands/ImageListTable 0.2
114 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
115 TestFunctional/parallel/ImageCommands/ImageListYaml 0.2
116 TestFunctional/parallel/ImageCommands/ImageBuild 1.6
117 TestFunctional/parallel/ImageCommands/Setup 0.95
118 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
119 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
120 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.12
122 TestFunctional/parallel/ProfileCmd/profile_not_create 0.37
123 TestFunctional/parallel/MountCmd/any-port 17.08
124 TestFunctional/parallel/ProfileCmd/profile_list 0.35
125 TestFunctional/parallel/ProfileCmd/profile_json_output 0.33
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 4.99
127 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.8
128 TestFunctional/parallel/MountCmd/specific-port 1.98
129 TestFunctional/parallel/MountCmd/VerifyCleanup 2.04
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.76
132 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.41
133 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.33
136 TestFunctional/parallel/ImageCommands/ImageRemove 0.46
137 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.06
138 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.5
139 TestFunctional/parallel/ServiceCmd/DeployApp 9.2
140 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
141 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
145 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
146 TestFunctional/parallel/ServiceCmd/List 1.68
147 TestFunctional/parallel/ServiceCmd/JSONOutput 1.67
148 TestFunctional/parallel/ServiceCmd/HTTPS 0.49
149 TestFunctional/parallel/ServiceCmd/Format 0.49
150 TestFunctional/parallel/ServiceCmd/URL 0.54
151 TestFunctional/delete_addon-resizer_images 0.07
152 TestFunctional/delete_my-image_image 0.01
153 TestFunctional/delete_minikube_cached_images 0.02
157 TestIngressAddonLegacy/StartLegacyK8sCluster 68.15
159 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 10.24
160 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.52
164 TestJSONOutput/start/Command 67.76
165 TestJSONOutput/start/Audit 0
167 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
168 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
170 TestJSONOutput/pause/Command 0.64
171 TestJSONOutput/pause/Audit 0
173 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
174 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
176 TestJSONOutput/unpause/Command 0.56
177 TestJSONOutput/unpause/Audit 0
179 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
180 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
182 TestJSONOutput/stop/Command 5.72
183 TestJSONOutput/stop/Audit 0
185 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
187 TestErrorJSONOutput 0.18
189 TestKicCustomNetwork/create_custom_network 31.14
190 TestKicCustomNetwork/use_default_bridge_network 24.24
191 TestKicExistingNetwork 27.08
192 TestKicCustomSubnet 24.26
193 TestKicStaticIP 23.49
194 TestMainNoArgs 0.04
195 TestMinikubeProfile 52.71
198 TestMountStart/serial/StartWithMountFirst 7.97
199 TestMountStart/serial/VerifyMountFirst 0.24
200 TestMountStart/serial/StartWithMountSecond 5.12
201 TestMountStart/serial/VerifyMountSecond 0.24
202 TestMountStart/serial/DeleteFirst 1.61
203 TestMountStart/serial/VerifyMountPostDelete 0.23
204 TestMountStart/serial/Stop 1.17
205 TestMountStart/serial/RestartStopped 6.95
206 TestMountStart/serial/VerifyMountPostStop 0.24
209 TestMultiNode/serial/FreshStart2Nodes 97.49
210 TestMultiNode/serial/DeployApp2Nodes 3.52
212 TestMultiNode/serial/AddNode 50.35
213 TestMultiNode/serial/ProfileList 0.25
214 TestMultiNode/serial/CopyFile 8.49
215 TestMultiNode/serial/StopNode 2.05
216 TestMultiNode/serial/StartAfterStop 10.48
217 TestMultiNode/serial/RestartKeepsNodes 111.82
218 TestMultiNode/serial/DeleteNode 4.56
219 TestMultiNode/serial/StopMultiNode 23.75
220 TestMultiNode/serial/RestartMultiNode 76.25
221 TestMultiNode/serial/ValidateNameConflict 22.97
226 TestPreload 143.2
228 TestScheduledStopUnix 99.8
231 TestInsufficientStorage 9.81
234 TestKubernetesUpgrade 356.27
235 TestMissingContainerUpgrade 162.86
237 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
238 TestNoKubernetes/serial/StartWithK8s 37.62
239 TestNoKubernetes/serial/StartWithStopK8s 9.28
240 TestNoKubernetes/serial/Start 6.58
241 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
242 TestNoKubernetes/serial/ProfileList 1.41
243 TestNoKubernetes/serial/Stop 1.43
244 TestNoKubernetes/serial/StartNoArgs 9.54
245 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
253 TestNetworkPlugins/group/false 3.44
258 TestPause/serial/Start 45.77
267 TestStoppedBinaryUpgrade/Setup 0.4
269 TestNetworkPlugins/group/auto/Start 69.31
270 TestStoppedBinaryUpgrade/MinikubeLogs 0.49
271 TestNetworkPlugins/group/flannel/Start 56.22
272 TestNetworkPlugins/group/auto/KubeletFlags 0.28
273 TestNetworkPlugins/group/auto/NetCatPod 9.35
274 TestNetworkPlugins/group/auto/DNS 0.16
275 TestNetworkPlugins/group/auto/Localhost 0.13
276 TestNetworkPlugins/group/auto/HairPin 0.13
277 TestNetworkPlugins/group/flannel/ControllerPod 5.02
278 TestNetworkPlugins/group/flannel/KubeletFlags 0.27
279 TestNetworkPlugins/group/flannel/NetCatPod 9.34
280 TestNetworkPlugins/group/enable-default-cni/Start 42.34
281 TestNetworkPlugins/group/flannel/DNS 0.17
282 TestNetworkPlugins/group/flannel/Localhost 0.14
283 TestNetworkPlugins/group/flannel/HairPin 0.16
284 TestNetworkPlugins/group/kindnet/Start 71.99
285 TestNetworkPlugins/group/bridge/Start 34.17
286 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
287 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.34
288 TestNetworkPlugins/group/enable-default-cni/DNS 21.7
289 TestNetworkPlugins/group/bridge/KubeletFlags 0.26
290 TestNetworkPlugins/group/bridge/NetCatPod 10.27
291 TestNetworkPlugins/group/bridge/DNS 0.16
292 TestNetworkPlugins/group/bridge/Localhost 0.14
293 TestNetworkPlugins/group/bridge/HairPin 0.15
294 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
295 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
296 TestNetworkPlugins/group/calico/Start 64.86
297 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
298 TestNetworkPlugins/group/kindnet/KubeletFlags 0.48
299 TestNetworkPlugins/group/kindnet/NetCatPod 10.45
300 TestNetworkPlugins/group/custom-flannel/Start 61.92
301 TestNetworkPlugins/group/kindnet/DNS 0.17
302 TestNetworkPlugins/group/kindnet/Localhost 0.14
303 TestNetworkPlugins/group/kindnet/HairPin 0.14
305 TestStartStop/group/old-k8s-version/serial/FirstStart 135.25
306 TestNetworkPlugins/group/calico/ControllerPod 5.02
307 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.28
308 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.3
309 TestNetworkPlugins/group/calico/KubeletFlags 0.32
310 TestNetworkPlugins/group/calico/NetCatPod 12.02
311 TestNetworkPlugins/group/custom-flannel/DNS 0.18
312 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
313 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
314 TestNetworkPlugins/group/calico/DNS 0.15
315 TestNetworkPlugins/group/calico/Localhost 0.15
316 TestNetworkPlugins/group/calico/HairPin 0.14
318 TestStartStop/group/no-preload/serial/FirstStart 66.12
320 TestStartStop/group/embed-certs/serial/FirstStart 71.02
322 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 42.42
323 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 7.4
324 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.95
325 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.32
326 TestStartStop/group/no-preload/serial/DeployApp 8.37
327 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.17
328 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 338.5
329 TestStartStop/group/old-k8s-version/serial/DeployApp 8.4
330 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.09
331 TestStartStop/group/embed-certs/serial/DeployApp 7.51
332 TestStartStop/group/no-preload/serial/Stop 11.98
333 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.77
334 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.04
335 TestStartStop/group/old-k8s-version/serial/Stop 11.92
336 TestStartStop/group/embed-certs/serial/Stop 11.94
337 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.17
338 TestStartStop/group/no-preload/serial/SecondStart 337.34
339 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
340 TestStartStop/group/old-k8s-version/serial/SecondStart 66.69
341 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
342 TestStartStop/group/embed-certs/serial/SecondStart 339.37
343 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
344 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
345 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.31
346 TestStartStop/group/old-k8s-version/serial/Pause 2.65
348 TestStartStop/group/newest-cni/serial/FirstStart 35.69
349 TestStartStop/group/newest-cni/serial/DeployApp 0
350 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.87
351 TestStartStop/group/newest-cni/serial/Stop 1.2
352 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
353 TestStartStop/group/newest-cni/serial/SecondStart 25.72
354 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
355 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
356 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.32
357 TestStartStop/group/newest-cni/serial/Pause 2.4
358 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 10.02
359 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
360 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.34
361 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.89
362 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 12.02
363 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 9.02
364 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
365 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.28
366 TestStartStop/group/no-preload/serial/Pause 2.54
367 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
368 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.28
369 TestStartStop/group/embed-certs/serial/Pause 2.54
TestDownloadOnly/v1.16.0/json-events (7.77s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-866840 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-866840 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (7.77192016s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (7.77s)
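
The json-events assertions consume minikube's -o=json stream, which is emitted as one CloudEvents-style JSON object per line. A quick way to eyeball the same stream outside the test harness (the jq filter is illustrative and not part of the test):

	# Sketch: re-run the logged command and print each event type as it arrives.
	out/minikube-linux-amd64 start -o=json --download-only -p download-only-866840 \
	  --force --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker \
	  | jq -r .type   # e.g. io.k8s.sigs.minikube.step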

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-866840
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-866840: exit status 85 (58.623071ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-866840 | jenkins | v1.31.2 | 21 Aug 23 10:33 UTC |          |
	|         | -p download-only-866840        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/21 10:33:20
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0821 10:33:20.445168   12471 out.go:296] Setting OutFile to fd 1 ...
	I0821 10:33:20.445297   12471 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 10:33:20.445306   12471 out.go:309] Setting ErrFile to fd 2...
	I0821 10:33:20.445310   12471 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 10:33:20.445505   12471 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17102-5717/.minikube/bin
	W0821 10:33:20.445608   12471 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17102-5717/.minikube/config/config.json: open /home/jenkins/minikube-integration/17102-5717/.minikube/config/config.json: no such file or directory
	I0821 10:33:20.446133   12471 out.go:303] Setting JSON to true
	I0821 10:33:20.446887   12471 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":950,"bootTime":1692613050,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0821 10:33:20.446936   12471 start.go:138] virtualization: kvm guest
	I0821 10:33:20.449533   12471 out.go:97] [download-only-866840] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	W0821 10:33:20.449614   12471 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17102-5717/.minikube/cache/preloaded-tarball: no such file or directory
	I0821 10:33:20.451116   12471 out.go:169] MINIKUBE_LOCATION=17102
	I0821 10:33:20.449722   12471 notify.go:220] Checking for updates...
	I0821 10:33:20.454000   12471 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 10:33:20.455579   12471 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17102-5717/kubeconfig
	I0821 10:33:20.457034   12471 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17102-5717/.minikube
	I0821 10:33:20.458769   12471 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0821 10:33:20.461454   12471 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0821 10:33:20.461664   12471 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 10:33:20.481123   12471 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0821 10:33:20.481185   12471 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0821 10:33:20.801331   12471 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:43 SystemTime:2023-08-21 10:33:20.793620852 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0821 10:33:20.801426   12471 docker.go:294] overlay module found
	I0821 10:33:20.803329   12471 out.go:97] Using the docker driver based on user configuration
	I0821 10:33:20.803376   12471 start.go:298] selected driver: docker
	I0821 10:33:20.803393   12471 start.go:902] validating driver "docker" against <nil>
	I0821 10:33:20.803476   12471 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0821 10:33:20.856640   12471 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:43 SystemTime:2023-08-21 10:33:20.848705175 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0821 10:33:20.856831   12471 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0821 10:33:20.857495   12471 start_flags.go:382] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0821 10:33:20.857689   12471 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0821 10:33:20.859793   12471 out.go:169] Using Docker driver with root privileges
	I0821 10:33:20.861378   12471 cni.go:84] Creating CNI manager for ""
	I0821 10:33:20.861392   12471 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0821 10:33:20.861400   12471 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0821 10:33:20.861413   12471 start_flags.go:319] config:
	{Name:download-only-866840 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-866840 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 10:33:20.862839   12471 out.go:97] Starting control plane node download-only-866840 in cluster download-only-866840
	I0821 10:33:20.862851   12471 cache.go:122] Beginning downloading kic base image for docker with crio
	I0821 10:33:20.864120   12471 out.go:97] Pulling base image ...
	I0821 10:33:20.864142   12471 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0821 10:33:20.864269   12471 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0821 10:33:20.880493   12471 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0821 10:33:20.880636   12471 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory
	I0821 10:33:20.880713   12471 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0821 10:33:20.897207   12471 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0821 10:33:20.897228   12471 cache.go:57] Caching tarball of preloaded images
	I0821 10:33:20.897357   12471 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0821 10:33:20.900305   12471 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0821 10:33:20.900317   12471 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0821 10:33:20.935512   12471 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/17102-5717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0821 10:33:24.171042   12471 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 as a tarball
	I0821 10:33:24.244262   12471 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0821 10:33:24.244389   12471 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17102-5717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0821 10:33:25.103585   12471 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0821 10:33:25.103914   12471 profile.go:148] Saving config to /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/download-only-866840/config.json ...
	I0821 10:33:25.103944   12471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/download-only-866840/config.json: {Name:mka3e467d6f8640c4567787659332c228d209d3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 10:33:25.104108   12471 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0821 10:33:25.104275   12471 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/17102-5717/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-866840"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.06s)
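
Exit status 85 is tolerated by the test: a --download-only profile never creates a control-plane node, so "minikube logs" has nothing to collect (hence the 'The control plane node "" does not exist.' hint in the stdout above). The preload fetch recorded in the log can also be reproduced and verified by hand; the URL and md5 below are copied from the download.go line in the log:

	# Sketch: fetch the v1.16.0 CRI-O preload tarball and check its md5.
	curl -fLO https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	echo "432b600409d778ea7a21214e83948570  preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4" | md5sum -c -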

                                                
                                    
TestDownloadOnly/v1.27.4/json-events (8.34s)

=== RUN   TestDownloadOnly/v1.27.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-866840 --force --alsologtostderr --kubernetes-version=v1.27.4 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-866840 --force --alsologtostderr --kubernetes-version=v1.27.4 --container-runtime=crio --driver=docker  --container-runtime=crio: (8.343275867s)
--- PASS: TestDownloadOnly/v1.27.4/json-events (8.34s)

                                                
                                    
TestDownloadOnly/v1.27.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.27.4/preload-exists
--- PASS: TestDownloadOnly/v1.27.4/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.27.4/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.27.4/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-866840
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-866840: exit status 85 (57.03341ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-866840 | jenkins | v1.31.2 | 21 Aug 23 10:33 UTC |          |
	|         | -p download-only-866840        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-866840 | jenkins | v1.31.2 | 21 Aug 23 10:33 UTC |          |
	|         | -p download-only-866840        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.4   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/21 10:33:28
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0821 10:33:28.280243   12623 out.go:296] Setting OutFile to fd 1 ...
	I0821 10:33:28.280355   12623 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 10:33:28.280363   12623 out.go:309] Setting ErrFile to fd 2...
	I0821 10:33:28.280367   12623 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 10:33:28.280580   12623 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17102-5717/.minikube/bin
	W0821 10:33:28.280682   12623 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17102-5717/.minikube/config/config.json: open /home/jenkins/minikube-integration/17102-5717/.minikube/config/config.json: no such file or directory
	I0821 10:33:28.281086   12623 out.go:303] Setting JSON to true
	I0821 10:33:28.281875   12623 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":958,"bootTime":1692613050,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0821 10:33:28.281942   12623 start.go:138] virtualization: kvm guest
	I0821 10:33:28.284371   12623 out.go:97] [download-only-866840] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0821 10:33:28.285949   12623 out.go:169] MINIKUBE_LOCATION=17102
	I0821 10:33:28.284507   12623 notify.go:220] Checking for updates...
	I0821 10:33:28.288496   12623 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 10:33:28.289803   12623 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17102-5717/kubeconfig
	I0821 10:33:28.291200   12623 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17102-5717/.minikube
	I0821 10:33:28.292660   12623 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0821 10:33:28.295926   12623 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0821 10:33:28.296315   12623 config.go:182] Loaded profile config "download-only-866840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W0821 10:33:28.296354   12623 start.go:810] api.Load failed for download-only-866840: filestore "download-only-866840": Docker machine "download-only-866840" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0821 10:33:28.296425   12623 driver.go:373] Setting default libvirt URI to qemu:///system
	W0821 10:33:28.296458   12623 start.go:810] api.Load failed for download-only-866840: filestore "download-only-866840": Docker machine "download-only-866840" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0821 10:33:28.318067   12623 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0821 10:33:28.318160   12623 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0821 10:33:28.371518   12623 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-08-21 10:33:28.363091129 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0821 10:33:28.371606   12623 docker.go:294] overlay module found
	I0821 10:33:28.373385   12623 out.go:97] Using the docker driver based on existing profile
	I0821 10:33:28.373402   12623 start.go:298] selected driver: docker
	I0821 10:33:28.373407   12623 start.go:902] validating driver "docker" against &{Name:download-only-866840 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-866840 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 10:33:28.373517   12623 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0821 10:33:28.421087   12623 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-08-21 10:33:28.41336181 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0821 10:33:28.421683   12623 cni.go:84] Creating CNI manager for ""
	I0821 10:33:28.421701   12623 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0821 10:33:28.421709   12623 start_flags.go:319] config:
	{Name:download-only-866840 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:download-only-866840 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 10:33:28.423553   12623 out.go:97] Starting control plane node download-only-866840 in cluster download-only-866840
	I0821 10:33:28.423568   12623 cache.go:122] Beginning downloading kic base image for docker with crio
	I0821 10:33:28.425020   12623 out.go:97] Pulling base image ...
	I0821 10:33:28.425042   12623 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0821 10:33:28.425146   12623 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0821 10:33:28.439454   12623 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0821 10:33:28.439553   12623 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory
	I0821 10:33:28.439567   12623 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory, skipping pull
	I0821 10:33:28.439570   12623 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in cache, skipping pull
	I0821 10:33:28.439576   12623 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 as a tarball
	I0821 10:33:29.027801   12623 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.4/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4
	I0821 10:33:29.027842   12623 cache.go:57] Caching tarball of preloaded images
	I0821 10:33:29.028027   12623 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0821 10:33:29.030026   12623 out.go:97] Downloading Kubernetes v1.27.4 preload ...
	I0821 10:33:29.030048   12623 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4 ...
	I0821 10:33:29.059964   12623 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.4/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:8fb3cf29e31ee2994fdad70ff1ffc061 -> /home/jenkins/minikube-integration/17102-5717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4
	I0821 10:33:32.539482   12623 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4 ...
	I0821 10:33:32.539569   12623 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17102-5717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-866840"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.4/LogsDuration (0.06s)
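
A note on the exit status above: a download-only profile never creates a control-plane node, so "minikube logs" has nothing to collect from and exits with status 85, which the test treats as the expected outcome. Below is a minimal sketch (not the harness's actual assertion code) of how such an expected non-zero exit can be checked from Go; the binary path and profile name are taken from the log output above.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Run the same command the test runs and inspect its exit code.
	out, err := exec.Command("out/minikube-linux-amd64", "logs", "-p", "download-only-866840").CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 85 {
		// Expected for a download-only profile: no control-plane node
		// exists, so there are no logs to collect.
		fmt.Printf("got expected exit status 85; output:\n%s", out)
		return
	}
	fmt.Println("unexpected result:", err)
}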

TestDownloadOnly/v1.28.0-rc.1/json-events (9.72s)

=== RUN   TestDownloadOnly/v1.28.0-rc.1/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-866840 --force --alsologtostderr --kubernetes-version=v1.28.0-rc.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-866840 --force --alsologtostderr --kubernetes-version=v1.28.0-rc.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (9.722686265s)
--- PASS: TestDownloadOnly/v1.28.0-rc.1/json-events (9.72s)

TestDownloadOnly/v1.28.0-rc.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0-rc.1/preload-exists
--- PASS: TestDownloadOnly/v1.28.0-rc.1/preload-exists (0.00s)

TestDownloadOnly/v1.28.0-rc.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.28.0-rc.1/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-866840
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-866840: exit status 85 (56.282043ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-866840 | jenkins | v1.31.2 | 21 Aug 23 10:33 UTC |          |
	|         | -p download-only-866840           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-866840 | jenkins | v1.31.2 | 21 Aug 23 10:33 UTC |          |
	|         | -p download-only-866840           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.4      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-866840 | jenkins | v1.31.2 | 21 Aug 23 10:33 UTC |          |
	|         | -p download-only-866840           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.0-rc.1 |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/21 10:33:36
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0821 10:33:36.681231   12767 out.go:296] Setting OutFile to fd 1 ...
	I0821 10:33:36.681328   12767 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 10:33:36.681335   12767 out.go:309] Setting ErrFile to fd 2...
	I0821 10:33:36.681339   12767 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 10:33:36.681536   12767 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17102-5717/.minikube/bin
	W0821 10:33:36.681667   12767 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17102-5717/.minikube/config/config.json: open /home/jenkins/minikube-integration/17102-5717/.minikube/config/config.json: no such file or directory
	I0821 10:33:36.682061   12767 out.go:303] Setting JSON to true
	I0821 10:33:36.682809   12767 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":967,"bootTime":1692613050,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0821 10:33:36.682858   12767 start.go:138] virtualization: kvm guest
	I0821 10:33:36.685345   12767 out.go:97] [download-only-866840] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0821 10:33:36.687051   12767 out.go:169] MINIKUBE_LOCATION=17102
	I0821 10:33:36.685487   12767 notify.go:220] Checking for updates...
	I0821 10:33:36.690035   12767 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 10:33:36.691768   12767 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17102-5717/kubeconfig
	I0821 10:33:36.693624   12767 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17102-5717/.minikube
	I0821 10:33:36.695102   12767 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0821 10:33:36.697906   12767 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0821 10:33:36.698278   12767 config.go:182] Loaded profile config "download-only-866840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	W0821 10:33:36.698310   12767 start.go:810] api.Load failed for download-only-866840: filestore "download-only-866840": Docker machine "download-only-866840" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0821 10:33:36.698383   12767 driver.go:373] Setting default libvirt URI to qemu:///system
	W0821 10:33:36.698412   12767 start.go:810] api.Load failed for download-only-866840: filestore "download-only-866840": Docker machine "download-only-866840" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0821 10:33:36.718031   12767 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0821 10:33:36.718123   12767 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0821 10:33:36.771928   12767 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-08-21 10:33:36.764309848 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0821 10:33:36.772032   12767 docker.go:294] overlay module found
	I0821 10:33:36.773932   12767 out.go:97] Using the docker driver based on existing profile
	I0821 10:33:36.773953   12767 start.go:298] selected driver: docker
	I0821 10:33:36.773958   12767 start.go:902] validating driver "docker" against &{Name:download-only-866840 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:download-only-866840 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 10:33:36.774088   12767 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0821 10:33:36.822832   12767 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-08-21 10:33:36.815443794 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0821 10:33:36.823514   12767 cni.go:84] Creating CNI manager for ""
	I0821 10:33:36.823537   12767 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0821 10:33:36.823547   12767 start_flags.go:319] config:
	{Name:download-only-866840 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0-rc.1 ClusterName:download-only-866840 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 10:33:36.825466   12767 out.go:97] Starting control plane node download-only-866840 in cluster download-only-866840
	I0821 10:33:36.825489   12767 cache.go:122] Beginning downloading kic base image for docker with crio
	I0821 10:33:36.827093   12767 out.go:97] Pulling base image ...
	I0821 10:33:36.827118   12767 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.1 and runtime crio
	I0821 10:33:36.827238   12767 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0821 10:33:36.842531   12767 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0821 10:33:36.842630   12767 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory
	I0821 10:33:36.842644   12767 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory, skipping pull
	I0821 10:33:36.842648   12767 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in cache, skipping pull
	I0821 10:33:36.842654   12767 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 as a tarball
	I0821 10:33:36.860989   12767 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0-rc.1/preloaded-images-k8s-v18-v1.28.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I0821 10:33:36.861018   12767 cache.go:57] Caching tarball of preloaded images
	I0821 10:33:36.861143   12767 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.1 and runtime crio
	I0821 10:33:36.863112   12767 out.go:97] Downloading Kubernetes v1.28.0-rc.1 preload ...
	I0821 10:33:36.863128   12767 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.0-rc.1-cri-o-overlay-amd64.tar.lz4 ...
	I0821 10:33:36.889761   12767 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0-rc.1/preloaded-images-k8s-v18-v1.28.0-rc.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:bb8ba69c7dfa450cc0765c8991e48fa2 -> /home/jenkins/minikube-integration/17102-5717/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-rc.1-cri-o-overlay-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-866840"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0-rc.1/LogsDuration (0.06s)
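
The preload URLs logged by download.go above carry a "?checksum=md5:<hash>" suffix, which indicates the tarball is verified against an md5 digest after download. The sketch below shows one way such a check can be written; it is illustrative only, not minikube's actual download code, and the URL and digest are copied from the v1.28.0-rc.1 log line above.

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

func downloadWithMD5(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unexpected status: %s", resp.Status)
	}
	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()
	// Hash the stream while writing it to disk, then compare digests.
	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0-rc.1/preloaded-images-k8s-v18-v1.28.0-rc.1-cri-o-overlay-amd64.tar.lz4"
	// Destination path is illustrative; the digest is the one from the log.
	if err := downloadWithMD5(url, "/tmp/preload.tar.lz4", "bb8ba69c7dfa450cc0765c8991e48fa2"); err != nil {
		fmt.Println(err)
	}
}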

TestDownloadOnly/DeleteAll (0.19s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.19s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-866840
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnlyKic (1.15s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-557830 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-557830" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-557830
--- PASS: TestDownloadOnlyKic (1.15s)

TestBinaryMirror (0.68s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-393575 --alsologtostderr --binary-mirror http://127.0.0.1:36497 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-393575" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-393575
--- PASS: TestBinaryMirror (0.68s)
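
TestBinaryMirror passes "--binary-mirror http://127.0.0.1:36497" so that minikube fetches Kubernetes binaries from a local HTTP endpoint instead of the default upstream. A minimal sketch of such a mirror follows, assuming all it needs to do is serve a directory of pre-downloaded binaries over plain HTTP; the "./mirror" layout is hypothetical and the test's real server may differ.

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve a local directory of pre-downloaded Kubernetes binaries
	// (layout hypothetical); minikube is then started with
	// --binary-mirror http://127.0.0.1:36497 as in the test above.
	log.Fatal(http.ListenAndServe("127.0.0.1:36497", http.FileServer(http.Dir("./mirror"))))
}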

TestOffline (80.63s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-575709 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-575709 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (1m18.351029978s)
helpers_test.go:175: Cleaning up "offline-crio-575709" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-575709
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-575709: (2.276248773s)
--- PASS: TestOffline (80.63s)

TestAddons/Setup (124.54s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p addons-351207 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p addons-351207 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m4.542126602s)
--- PASS: TestAddons/Setup (124.54s)

TestAddons/parallel/Registry (13.87s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 12.321139ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-rszhp" [049e7ace-e189-472f-a06b-acb5921f60c6] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.014976847s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-srz8z" [f70199f4-b53c-4224-9c07-94522d99ba02] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.011093028s
addons_test.go:316: (dbg) Run:  kubectl --context addons-351207 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-351207 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-351207 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.050495372s)
addons_test.go:335: (dbg) Run:  out/minikube-linux-amd64 -p addons-351207 ip
2023/08/21 10:36:06 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p addons-351207 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (13.87s)
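
The registry check above boils down to running a throwaway busybox pod that issues "wget --spider -S" against the registry's cluster-local service name, which succeeds only if in-cluster DNS and the registry service both work. A sketch of the same probe, driven from Go, is below; it mirrors the logged kubectl command (using -i rather than -it, since a harness has no TTY) and is not the test's own helper code.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Launch a one-shot busybox pod that only probes the registry service.
	cmd := exec.Command("kubectl", "--context", "addons-351207",
		"run", "--rm", "registry-test", "--restart=Never",
		"--image=gcr.io/k8s-minikube/busybox", "-i", "--",
		"sh", "-c", "wget --spider -S http://registry.kube-system.svc.cluster.local")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("registry not reachable:", err)
	}
}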

TestAddons/parallel/InspektorGadget (10.81s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-stfk6" [0b398fe7-d080-4a07-bc4b-4de99116aeaf] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.059193536s
addons_test.go:817: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-351207
addons_test.go:817: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-351207: (5.75366365s)
--- PASS: TestAddons/parallel/InspektorGadget (10.81s)

TestAddons/parallel/MetricsServer (6.11s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 3.196502ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7746886d4f-xl26c" [cdb34a8d-7efb-49db-88d0-c69ba81643b8] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.012045382s
addons_test.go:391: (dbg) Run:  kubectl --context addons-351207 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p addons-351207 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p addons-351207 addons disable metrics-server --alsologtostderr -v=1: (1.03575457s)
--- PASS: TestAddons/parallel/MetricsServer (6.11s)

TestAddons/parallel/HelmTiller (9.77s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:432: tiller-deploy stabilized in 3.225062ms
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6847666dc-87cll" [c5ec8a2f-92c6-4d65-8462-c018b253cf0d] Running
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.01170662s
addons_test.go:449: (dbg) Run:  kubectl --context addons-351207 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:449: (dbg) Done: kubectl --context addons-351207 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.268648097s)
addons_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p addons-351207 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (9.77s)

TestAddons/parallel/CSI (77.71s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 13.62391ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-351207 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351207 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351207 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-351207 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [88b86d24-0b67-4cef-972e-ace9292a57cb] Pending
helpers_test.go:344: "task-pv-pod" [88b86d24-0b67-4cef-972e-ace9292a57cb] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [88b86d24-0b67-4cef-972e-ace9292a57cb] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.009048828s
addons_test.go:560: (dbg) Run:  kubectl --context addons-351207 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-351207 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-351207 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-351207 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-351207 delete pod task-pv-pod
addons_test.go:576: (dbg) Run:  kubectl --context addons-351207 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-351207 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351207 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351207 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351207 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351207 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351207 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351207 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351207 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351207 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351207 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351207 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351207 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-351207 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [800a6f3a-4aa4-4e03-803d-002965056566] Pending
helpers_test.go:344: "task-pv-pod-restore" [800a6f3a-4aa4-4e03-803d-002965056566] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [800a6f3a-4aa4-4e03-803d-002965056566] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.010531709s
addons_test.go:602: (dbg) Run:  kubectl --context addons-351207 delete pod task-pv-pod-restore
addons_test.go:602: (dbg) Done: kubectl --context addons-351207 delete pod task-pv-pod-restore: (1.020860645s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-351207 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-351207 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-amd64 -p addons-351207 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-linux-amd64 -p addons-351207 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.538119906s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-amd64 -p addons-351207 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (77.71s)
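
The long run of helpers_test.go:394 lines above is a poll loop: the helper re-reads the PVC's .status.phase via kubectl until the claim is ready or the 6m0s budget runs out. A sketch of that pattern follows; the context, namespace, and claim name come from the log, while the "Bound" target phase and the 2-second interval are assumptions of this sketch rather than values taken from the harness.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForPVCBound polls the claim's phase until it reports "Bound" or the
// timeout expires, mirroring the repeated kubectl invocations in the log.
func waitForPVCBound(ctx, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", ctx, "get", "pvc", name,
			"-o", "jsonpath={.status.phase}", "-n", ns).Output()
		if err == nil && string(out) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s not Bound within %s", ns, name, timeout)
}

func main() {
	if err := waitForPVCBound("addons-351207", "default", "hpvc", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}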

TestAddons/parallel/Headlamp (13.03s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-351207 --alsologtostderr -v=1
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5c78f74d8d-9h9n5" [2ad5b1a5-c120-4517-b2e8-0aad4c648c4e] Pending
helpers_test.go:344: "headlamp-5c78f74d8d-9h9n5" [2ad5b1a5-c120-4517-b2e8-0aad4c648c4e] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5c78f74d8d-9h9n5" [2ad5b1a5-c120-4517-b2e8-0aad4c648c4e] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.03094764s
--- PASS: TestAddons/parallel/Headlamp (13.03s)

TestAddons/parallel/CloudSpanner (5.85s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-d67854dc9-n4dqk" [01f55465-9ad2-43b6-85c1-63a39b5e9a44] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.01363865s
addons_test.go:836: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-351207
--- PASS: TestAddons/parallel/CloudSpanner (5.85s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-351207 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-351207 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)
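
This subtest verifies that the gcp-auth addon copies its credentials Secret into namespaces created after the addon was enabled. A hedged sketch of the same two-step check, assuming kubectl is on PATH (in practice the replication may need a moment, so a retry around the second call would be reasonable):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	ctx := "addons-351207" // profile context taken from the log
	// Step 1: create a fresh namespace.
	if err := exec.Command("kubectl", "--context", ctx,
		"create", "ns", "new-namespace").Run(); err != nil {
		fmt.Println("create ns failed:", err)
		return
	}
	// Step 2: the addon should have replicated its Secret into it.
	if err := exec.Command("kubectl", "--context", ctx,
		"get", "secret", "gcp-auth", "-n", "new-namespace").Run(); err != nil {
		fmt.Println("gcp-auth secret not found:", err)
		return
	}
	fmt.Println("gcp-auth secret replicated into new-namespace")
}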

TestAddons/StoppedEnableDisable (12.13s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-351207
addons_test.go:148: (dbg) Done: out/minikube-linux-amd64 stop -p addons-351207: (11.908498676s)
addons_test.go:152: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-351207
addons_test.go:156: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-351207
addons_test.go:161: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-351207
--- PASS: TestAddons/StoppedEnableDisable (12.13s)

TestCertOptions (29.43s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-400386 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-400386 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (26.335893073s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-400386 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-400386 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-400386 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-400386" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-400386
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-400386: (2.405465819s)
--- PASS: TestCertOptions (29.43s)
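
The openssl call above is how the test confirms that every extra --apiserver-ips and --apiserver-names value ended up as a subject alternative name in the apiserver certificate, while the config view and admin.conf checks confirm the non-default port 8555. A minimal SAN check in the same spirit, assuming minikube is on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Dump the certificate text from inside the node, as the test does.
	out, err := exec.Command("minikube", "-p", "cert-options-400386", "ssh",
		"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt").Output()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	text := string(out)
	// Each value passed at start time should appear among the SANs.
	for _, want := range []string{"192.168.15.15", "www.google.com", "localhost"} {
		if !strings.Contains(text, want) {
			fmt.Println("missing SAN:", want)
			return
		}
	}
	fmt.Println("all requested SANs present")
}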

TestCertExpiration (234.91s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-650157 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-650157 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (26.33610083s)
E0821 11:05:53.215754   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/client.crt: no such file or directory
E0821 11:05:56.433420   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-650157 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-650157 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (26.620040969s)
helpers_test.go:175: Cleaning up "cert-expiration-650157" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-650157
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-650157: (1.955249038s)
--- PASS: TestCertExpiration (234.91s)
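
The first start issues three-minute cluster certificates; after they lapse, the second start with --cert-expiration=8760h must regenerate them with a one-year lifetime. A sketch of how the new lifetime could be confirmed, assuming the certificate lives at the same in-node path the CertOptions test reads:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "-p", "cert-expiration-650157", "ssh",
		"sudo cat /var/lib/minikube/certs/apiserver.crt").Output()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	block, _ := pem.Decode(out)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse failed:", err)
		return
	}
	// After the second start this should be roughly 8760h (one year).
	fmt.Println("certificate lifetime:", cert.NotAfter.Sub(cert.NotBefore))
}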

TestForceSystemdFlag (33.86s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-577578 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-577578 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (28.637935755s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-577578 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-577578" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-577578
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-577578: (4.933316675s)
--- PASS: TestForceSystemdFlag (33.86s)
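
--force-systemd switches the node's container runtime from the cgroupfs to the systemd cgroup manager, and the cat of the CRI-O drop-in above is the verification step. A hedged equivalent, assuming minikube is on PATH and that the drop-in uses CRI-O's standard cgroup_manager key:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "force-systemd-flag-577578", "ssh",
		"cat /etc/crio/crio.conf.d/02-crio.conf").Output()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	if strings.Contains(string(out), `cgroup_manager = "systemd"`) {
		fmt.Println("CRI-O is using the systemd cgroup manager")
	} else {
		fmt.Println("systemd cgroup manager not configured")
	}
}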

TestForceSystemdEnv (29.39s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-121880 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-121880 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (27.010631368s)
helpers_test.go:175: Cleaning up "force-systemd-env-121880" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-121880
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-121880: (2.380740959s)
--- PASS: TestForceSystemdEnv (29.39s)

TestKVMDriverInstallOrUpdate (3.3s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.30s)

TestErrorSpam/setup (23.07s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-665534 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-665534 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-665534 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-665534 --driver=docker  --container-runtime=crio: (23.073925074s)
--- PASS: TestErrorSpam/setup (23.07s)

TestErrorSpam/start (0.57s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-665534 --log_dir /tmp/nospam-665534 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-665534 --log_dir /tmp/nospam-665534 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-665534 --log_dir /tmp/nospam-665534 start --dry-run
--- PASS: TestErrorSpam/start (0.57s)

TestErrorSpam/status (0.82s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-665534 --log_dir /tmp/nospam-665534 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-665534 --log_dir /tmp/nospam-665534 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-665534 --log_dir /tmp/nospam-665534 status
--- PASS: TestErrorSpam/status (0.82s)

TestErrorSpam/pause (1.44s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-665534 --log_dir /tmp/nospam-665534 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-665534 --log_dir /tmp/nospam-665534 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-665534 --log_dir /tmp/nospam-665534 pause
--- PASS: TestErrorSpam/pause (1.44s)

TestErrorSpam/unpause (1.43s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-665534 --log_dir /tmp/nospam-665534 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-665534 --log_dir /tmp/nospam-665534 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-665534 --log_dir /tmp/nospam-665534 unpause
--- PASS: TestErrorSpam/unpause (1.43s)

TestErrorSpam/stop (1.35s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-665534 --log_dir /tmp/nospam-665534 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-665534 --log_dir /tmp/nospam-665534 stop: (1.181205778s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-665534 --log_dir /tmp/nospam-665534 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-665534 --log_dir /tmp/nospam-665534 stop
--- PASS: TestErrorSpam/stop (1.35s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17102-5717/.minikube/files/etc/test/nested/copy/12460/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (66.68s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-923429 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0821 10:40:53.215742   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/client.crt: no such file or directory
E0821 10:40:53.221479   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/client.crt: no such file or directory
E0821 10:40:53.231784   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/client.crt: no such file or directory
E0821 10:40:53.252071   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/client.crt: no such file or directory
E0821 10:40:53.292380   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/client.crt: no such file or directory
E0821 10:40:53.372681   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/client.crt: no such file or directory
E0821 10:40:53.533078   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/client.crt: no such file or directory
E0821 10:40:53.853643   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-923429 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m6.675936274s)
--- PASS: TestFunctional/serial/StartWithProxy (66.68s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (44.26s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-923429 --alsologtostderr -v=8
E0821 10:40:54.494092   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/client.crt: no such file or directory
E0821 10:40:55.775120   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/client.crt: no such file or directory
E0821 10:40:58.335926   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/client.crt: no such file or directory
E0821 10:41:03.456648   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/client.crt: no such file or directory
E0821 10:41:13.696897   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/client.crt: no such file or directory
E0821 10:41:34.177728   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-923429 --alsologtostderr -v=8: (44.257046234s)
functional_test.go:659: soft start took 44.257754453s for "functional-923429" cluster.
--- PASS: TestFunctional/serial/SoftStart (44.26s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-923429 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.64s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.64s)

TestFunctional/serial/CacheCmd/cache/add_local (1.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-923429 /tmp/TestFunctionalserialCacheCmdcacheadd_local1181636685/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 cache add minikube-local-cache-test:functional-923429
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 cache delete minikube-local-cache-test:functional-923429
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-923429
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.22s)
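
add_local exercises the full round trip for a locally built image: build it with docker, load it into the cluster with cache add, then clean up both the cache entry and the host image. A compact sketch of the same sequence, with the image tag taken from the log and the build context directory left as an assumption:

package main

import (
	"fmt"
	"os/exec"
)

// run executes one step of the workflow and reports the command it ran.
func run(args ...string) error {
	cmd := exec.Command(args[0], args[1:]...)
	fmt.Println("running:", cmd.String())
	return cmd.Run()
}

func main() {
	img := "minikube-local-cache-test:functional-923429"
	steps := [][]string{
		{"docker", "build", "-t", img, "."}, // "." is a placeholder context
		{"minikube", "-p", "functional-923429", "cache", "add", img},
		{"minikube", "-p", "functional-923429", "cache", "delete", img},
		{"docker", "rmi", img},
	}
	for _, s := range steps {
		if err := run(s...); err != nil {
			fmt.Println("step failed:", err)
			return
		}
	}
}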

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.56s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-923429 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (248.373146ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.56s)
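
The expected failure in the middle is the point of this subtest: the image is deleted inside the node, crictl inspecti confirms it is gone (the FATA line), and cache reload pushes every cached image back in. A sketch of that round trip, assuming minikube is on PATH:

package main

import (
	"fmt"
	"os/exec"
)

// inNode runs a command inside the minikube node over ssh.
func inNode(profile, cmd string) error {
	return exec.Command("minikube", "-p", profile, "ssh", "sudo "+cmd).Run()
}

func main() {
	profile, img := "functional-923429", "registry.k8s.io/pause:latest"
	_ = inNode(profile, "crictl rmi "+img) // drop the image from the node
	if inNode(profile, "crictl inspecti "+img) == nil {
		fmt.Println("image unexpectedly still present")
		return
	}
	if err := exec.Command("minikube", "-p", profile,
		"cache", "reload").Run(); err != nil {
		fmt.Println("cache reload failed:", err)
		return
	}
	if err := inNode(profile, "crictl inspecti "+img); err != nil {
		fmt.Println("image still missing after reload:", err)
		return
	}
	fmt.Println("cache reload restored", img)
}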

TestFunctional/serial/CacheCmd/cache/delete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 kubectl -- --context functional-923429 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-923429 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

TestFunctional/serial/ExtraConfig (27.73s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-923429 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-923429 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (27.729044855s)
functional_test.go:757: restart took 27.729172794s for "functional-923429" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (27.73s)
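
--extra-config takes component.key=value pairs and forwards each to the named kubeadm component, here enabling the NamespaceAutoProvision admission plugin on the apiserver. One way to confirm the flag actually reached the apiserver's command line, sketched under the assumption that kubectl is on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The apiserver is a static pod, so its flags are visible in the pod spec.
	out, err := exec.Command("kubectl", "--context", "functional-923429",
		"get", "pod", "-n", "kube-system", "-l", "component=kube-apiserver",
		"-o", "jsonpath={.items[0].spec.containers[0].command}").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	if strings.Contains(string(out), "enable-admission-plugins=NamespaceAutoProvision") {
		fmt.Println("admission plugin flag reached the apiserver")
	} else {
		fmt.Println("flag not found on the apiserver command line")
	}
}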

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-923429 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
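
The phase/status pairs above come from listing every tier=control-plane pod and requiring phase Running plus a Ready condition of True. A trimmed sketch of the same assertion via a jsonpath range query, assuming kubectl is on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// One line per control-plane pod: "<name> <phase> <Ready status>".
	out, err := exec.Command("kubectl", "--context", "functional-923429",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o",
		`jsonpath={range .items[*]}{.metadata.name} {.status.phase} {.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}`).Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	healthy := true
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		fmt.Println(line)
		if !strings.HasSuffix(line, "Running True") {
			healthy = false
		}
	}
	fmt.Println("control plane healthy:", healthy)
}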

TestFunctional/serial/LogsCmd (1.29s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-923429 logs: (1.285322553s)
--- PASS: TestFunctional/serial/LogsCmd (1.29s)

TestFunctional/serial/LogsFileCmd (1.3s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 logs --file /tmp/TestFunctionalserialLogsFileCmd3962415665/001/logs.txt
E0821 10:42:15.138948   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/client.crt: no such file or directory
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-923429 logs --file /tmp/TestFunctionalserialLogsFileCmd3962415665/001/logs.txt: (1.298485149s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.30s)

TestFunctional/serial/InvalidService (4.63s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-923429 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-923429
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-923429: exit status 115 (308.117596ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30635 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-923429 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-923429 delete -f testdata/invalidsvc.yaml: (1.032748848s)
--- PASS: TestFunctional/serial/InvalidService (4.63s)
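
Exit status 115 (SVC_UNREACHABLE) is deliberate here: the Service object exists and even gets a NodePort, but it selects no running pod, so there is nothing to reach. The quickest independent confirmation is to look at the Service's endpoints; a sketch assuming kubectl is on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-923429",
		"get", "endpoints", "invalid-svc",
		"-o", "jsonpath={.subsets[*].addresses[*].ip}").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	if strings.TrimSpace(string(out)) == "" {
		// No ready addresses: the condition behind SVC_UNREACHABLE above.
		fmt.Println("invalid-svc has no ready endpoints")
		return
	}
	fmt.Println("ready endpoints:", strings.TrimSpace(string(out)))
}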

TestFunctional/parallel/ConfigCmd (0.3s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-923429 config get cpus: exit status 14 (45.453563ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-923429 config get cpus: exit status 14 (51.990063ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.30s)
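
Note the contract being tested: config get on an unset key exits 14 instead of printing an empty string, so callers can tell "unset" apart from "set to an empty value". A sketch that reads the exit code the same way, assuming minikube is on PATH:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// configGet returns the value, or the minikube exit code when the key is unset.
func configGet(profile, key string) (string, int, error) {
	out, err := exec.Command("minikube", "-p", profile,
		"config", "get", key).Output()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return "", exitErr.ExitCode(), nil // 14 means "key not found"
	}
	return strings.TrimSpace(string(out)), 0, err
}

func main() {
	val, code, err := configGet("functional-923429", "cpus")
	switch {
	case err != nil:
		fmt.Println("exec failed:", err)
	case code == 14:
		fmt.Println("cpus is not set")
	default:
		fmt.Println("cpus =", val)
	}
}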

TestFunctional/parallel/DashboardCmd (21.65s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-923429 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-923429 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 46047: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (21.65s)

TestFunctional/parallel/DryRun (0.53s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-923429 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-923429 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (231.262448ms)

-- stdout --
	* [functional-923429] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17102
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17102-5717/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17102-5717/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0821 10:42:24.867912   45644 out.go:296] Setting OutFile to fd 1 ...
	I0821 10:42:24.868055   45644 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 10:42:24.868061   45644 out.go:309] Setting ErrFile to fd 2...
	I0821 10:42:24.868068   45644 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 10:42:24.868365   45644 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17102-5717/.minikube/bin
	I0821 10:42:24.869021   45644 out.go:303] Setting JSON to false
	I0821 10:42:24.870391   45644 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1495,"bootTime":1692613050,"procs":406,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0821 10:42:24.870457   45644 start.go:138] virtualization: kvm guest
	I0821 10:42:24.872508   45644 out.go:177] * [functional-923429] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0821 10:42:24.874363   45644 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 10:42:24.874335   45644 notify.go:220] Checking for updates...
	I0821 10:42:24.875880   45644 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 10:42:24.936939   45644 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17102-5717/kubeconfig
	I0821 10:42:24.938552   45644 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17102-5717/.minikube
	I0821 10:42:24.940089   45644 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0821 10:42:24.941689   45644 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 10:42:24.943735   45644 config.go:182] Loaded profile config "functional-923429": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0821 10:42:24.944324   45644 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 10:42:24.973844   45644 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0821 10:42:24.973939   45644 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0821 10:42:25.028930   45644 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:49 SystemTime:2023-08-21 10:42:25.020892009 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0821 10:42:25.029035   45644 docker.go:294] overlay module found
	I0821 10:42:25.038237   45644 out.go:177] * Using the docker driver based on existing profile
	I0821 10:42:25.039642   45644 start.go:298] selected driver: docker
	I0821 10:42:25.039661   45644 start.go:902] validating driver "docker" against &{Name:functional-923429 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:functional-923429 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 10:42:25.039796   45644 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 10:42:25.042421   45644 out.go:177] 
	W0821 10:42:25.044017   45644 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0821 10:42:25.045348   45644 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-923429 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.53s)
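
--dry-run runs the full validation pass without creating or mutating anything, and the 250MB request trips the 1800MB minimum, producing exit code 23 (RSRC_INSUFFICIENT_REQ_MEMORY). That makes it usable as a scripted preflight check, sketched here with minikube assumed on PATH:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("minikube", "start", "-p", "functional-923429",
		"--dry-run", "--memory", "250MB",
		"--driver=docker", "--container-runtime=crio").Run()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("configuration validated")
	case errors.As(err, &exitErr):
		// 23 is the RSRC_INSUFFICIENT_REQ_MEMORY code seen above.
		fmt.Println("validation failed, exit code:", exitErr.ExitCode())
	default:
		fmt.Println("could not run minikube:", err)
	}
}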

TestFunctional/parallel/InternationalLanguage (0.17s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-923429 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-923429 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (165.109046ms)

-- stdout --
	* [functional-923429] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17102
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17102-5717/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17102-5717/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0821 10:42:24.698316   45570 out.go:296] Setting OutFile to fd 1 ...
	I0821 10:42:24.698472   45570 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 10:42:24.698480   45570 out.go:309] Setting ErrFile to fd 2...
	I0821 10:42:24.698485   45570 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 10:42:24.698747   45570 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17102-5717/.minikube/bin
	I0821 10:42:24.699276   45570 out.go:303] Setting JSON to false
	I0821 10:42:24.700378   45570 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1495,"bootTime":1692613050,"procs":407,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0821 10:42:24.700446   45570 start.go:138] virtualization: kvm guest
	I0821 10:42:24.703432   45570 out.go:177] * [functional-923429] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	I0821 10:42:24.705185   45570 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 10:42:24.705237   45570 notify.go:220] Checking for updates...
	I0821 10:42:24.708066   45570 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 10:42:24.710122   45570 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17102-5717/kubeconfig
	I0821 10:42:24.711490   45570 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17102-5717/.minikube
	I0821 10:42:24.712821   45570 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0821 10:42:24.714125   45570 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 10:42:24.715774   45570 config.go:182] Loaded profile config "functional-923429": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0821 10:42:24.716192   45570 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 10:42:24.742103   45570 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0821 10:42:24.742199   45570 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0821 10:42:24.807176   45570 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:49 SystemTime:2023-08-21 10:42:24.798962274 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0821 10:42:24.807304   45570 docker.go:294] overlay module found
	I0821 10:42:24.809092   45570 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0821 10:42:24.810497   45570 start.go:298] selected driver: docker
	I0821 10:42:24.810512   45570 start.go:902] validating driver "docker" against &{Name:functional-923429 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:functional-923429 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 10:42:24.810617   45570 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 10:42:24.812705   45570 out.go:177] 
	W0821 10:42:24.814025   45570 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0821 10:42:24.815474   45570 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)
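
The French output is driven entirely by the process locale: the test reruns the same failing --dry-run under a French locale and expects minikube's bundled translation of the RSRC_INSUFFICIENT_REQ_MEMORY message. A sketch, with the exact locale string an assumption:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "start", "-p", "functional-923429",
		"--dry-run", "--memory", "250MB",
		"--driver=docker", "--container-runtime=crio")
	// Selecting a locale in the environment flips the output language.
	cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8")
	out, _ := cmd.CombinedOutput() // non-zero exit expected: 250MB < minimum
	fmt.Printf("%s", out)
}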

TestFunctional/parallel/StatusCmd (1.95s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.95s)

TestFunctional/parallel/ServiceCmdConnect (11.74s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-923429 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-923429 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6fb669fc84-vhxgx" [6fe51a88-e2b2-4b31-b7da-01247ba32b7b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-6fb669fc84-vhxgx" [6fe51a88-e2b2-4b31-b7da-01247ba32b7b] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.011078325s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:31230
functional_test.go:1674: http://192.168.49.2:31230: success! body:

Hostname: hello-node-connect-6fb669fc84-vhxgx

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31230
	user-agent=Go-http-client/1.1
Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.74s)
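
The test above deploys echoserver, exposes it as a NodePort service, and asks minikube for the node URL. A sketch of the same flow by hand; the final curl is an illustrative check, not a step the test runs:

kubectl --context functional-923429 create deployment hello-node-connect \
  --image=registry.k8s.io/echoserver:1.8
kubectl --context functional-923429 expose deployment hello-node-connect \
  --type=NodePort --port=8080
# Fetch the node URL, then hit it; echoserver answers with the hostname and
# request details seen in the response body above.
URL=$(out/minikube-linux-amd64 -p functional-923429 service hello-node-connect --url)
curl -s "$URL"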

TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

TestFunctional/parallel/PersistentVolumeClaim (27.35s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [d0745dcd-b47d-49a5-83ce-c1d42f61dfef] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.044722973s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-923429 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-923429 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-923429 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-923429 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [61e1623b-aa83-4711-93c6-2c47fd3ff405] Pending
helpers_test.go:344: "sp-pod" [61e1623b-aa83-4711-93c6-2c47fd3ff405] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [61e1623b-aa83-4711-93c6-2c47fd3ff405] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.010408583s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-923429 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-923429 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-923429 delete -f testdata/storage-provisioner/pod.yaml: (1.163610197s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-923429 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e195392b-6a61-4790-8f92-03600a78f56f] Pending
helpers_test.go:344: "sp-pod" [e195392b-6a61-4790-8f92-03600a78f56f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e195392b-6a61-4790-8f92-03600a78f56f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.009805244s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-923429 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.35s)
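
The sequence above is a persistence check: write a file through one pod, delete the pod, and read the file back from a replacement pod bound to the same claim. A hedged sketch of that flow; the manifest below is an assumption about the shape of testdata/storage-provisioner/pvc.yaml, whose contents are not shown in the log:

# Create a claim equivalent to "myclaim" (size and access mode are assumptions).
cat <<'EOF' | kubectl --context functional-923429 apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
EOF
# Write through the first pod, recreate the pod, then verify the file survives.
kubectl --context functional-923429 exec sp-pod -- touch /tmp/mount/foo
kubectl --context functional-923429 delete -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-923429 apply -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-923429 exec sp-pod -- ls /tmp/mount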

TestFunctional/parallel/SSHCmd (0.47s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.47s)

TestFunctional/parallel/CpCmd (1.15s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 ssh -n functional-923429 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 cp functional-923429:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3068536443/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 ssh -n functional-923429 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.15s)
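
minikube cp copies in both directions: a bare path refers to the host, while the <node>:<path> form refers to a path inside the node. A short sketch of the round trip checked above (/tmp/cp-test.txt is an arbitrary destination, not the test's temp dir):

out/minikube-linux-amd64 -p functional-923429 cp testdata/cp-test.txt /home/docker/cp-test.txt
out/minikube-linux-amd64 -p functional-923429 cp functional-923429:/home/docker/cp-test.txt /tmp/cp-test.txt
diff testdata/cp-test.txt /tmp/cp-test.txt   # no output means the copies match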

TestFunctional/parallel/MySQL (23.76s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-923429 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-7db894d786-6lkws" [dd5f20b9-4d32-4566-a79b-300e249f3c2f] Pending
helpers_test.go:344: "mysql-7db894d786-6lkws" [dd5f20b9-4d32-4566-a79b-300e249f3c2f] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-7db894d786-6lkws" [dd5f20b9-4d32-4566-a79b-300e249f3c2f] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.029345985s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-923429 exec mysql-7db894d786-6lkws -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-923429 exec mysql-7db894d786-6lkws -- mysql -ppassword -e "show databases;": exit status 1 (198.07394ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-923429 exec mysql-7db894d786-6lkws -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-923429 exec mysql-7db894d786-6lkws -- mysql -ppassword -e "show databases;": exit status 1 (197.220035ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-923429 exec mysql-7db894d786-6lkws -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.76s)
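
The two ERROR 2002 failures above are expected noise: the pod reports Running before mysqld has created its socket, so the test simply retries until the third attempt succeeds. An illustrative poll (not the test's own code) that waits for the server instead of failing on the first try:

# Retry "show databases" until mysqld accepts connections, up to ~60s.
for i in $(seq 1 30); do
  if kubectl --context functional-923429 exec mysql-7db894d786-6lkws -- \
       mysql -ppassword -e "show databases;" >/dev/null 2>&1; then
    echo "mysql is up"
    break
  fi
  sleep 2
done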

TestFunctional/parallel/FileSync (0.35s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/12460/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 ssh "sudo cat /etc/test/nested/copy/12460/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.35s)
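
This test relies on minikube's file sync: files placed under $MINIKUBE_HOME/files on the host are copied into the node at the same path when the node starts. A hedged sketch of that mechanism (paths mirror the test; note the sync happens at start, so a freshly added file only appears after the node is restarted):

mkdir -p ~/.minikube/files/etc/test/nested/copy/12460
echo "Test file for checking file sync process" \
  > ~/.minikube/files/etc/test/nested/copy/12460/hosts
# After the node (re)starts, the file shows up inside the VM:
out/minikube-linux-amd64 -p functional-923429 ssh "cat /etc/test/nested/copy/12460/hosts"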

TestFunctional/parallel/CertSync (1.83s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/12460.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 ssh "sudo cat /etc/ssl/certs/12460.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/12460.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 ssh "sudo cat /usr/share/ca-certificates/12460.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/124602.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 ssh "sudo cat /etc/ssl/certs/124602.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/124602.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 ssh "sudo cat /usr/share/ca-certificates/124602.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.83s)
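
The 51391683.0 and 3ec20f2e.0 names above are OpenSSL subject-hash filenames: CA certificates in /etc/ssl/certs are looked up by a hash of their subject. A small sketch showing how to derive the expected name for any PEM certificate (the local path to 12460.pem is illustrative):

# Compute the subject hash, then read the matching file inside the node.
HASH=$(openssl x509 -in ./12460.pem -noout -hash)
out/minikube-linux-amd64 -p functional-923429 ssh "sudo cat /etc/ssl/certs/${HASH}.0"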

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-923429 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-923429 ssh "sudo systemctl is-active docker": exit status 1 (269.101644ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-923429 ssh "sudo systemctl is-active containerd": exit status 1 (316.874723ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)
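
The non-zero exits here are the point of the test: systemctl is-active reports the unit state in its exit code (0 for active, 3 for inactive), and minikube ssh propagates that code, so "inactive" plus exit status 3 shows docker and containerd are disabled on this crio node. A quick illustrative check across all three runtimes:

# Print each runtime's state; only crio should come back active on this node.
for unit in docker containerd crio; do
  state=$(out/minikube-linux-amd64 -p functional-923429 ssh "sudo systemctl is-active $unit" || true)
  echo "$unit: $state"
done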

TestFunctional/parallel/License (0.16s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.16s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.47s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.47s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-923429 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.27.4
registry.k8s.io/kube-proxy:v1.27.4
registry.k8s.io/kube-controller-manager:v1.27.4
registry.k8s.io/kube-apiserver:v1.27.4
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-923429
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230511-dc714da8
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-923429 image ls --format short --alsologtostderr:
I0821 10:42:56.484690   50399 out.go:296] Setting OutFile to fd 1 ...
I0821 10:42:56.484796   50399 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0821 10:42:56.484808   50399 out.go:309] Setting ErrFile to fd 2...
I0821 10:42:56.484812   50399 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0821 10:42:56.485008   50399 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17102-5717/.minikube/bin
I0821 10:42:56.485545   50399 config.go:182] Loaded profile config "functional-923429": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0821 10:42:56.485633   50399 config.go:182] Loaded profile config "functional-923429": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0821 10:42:56.486009   50399 cli_runner.go:164] Run: docker container inspect functional-923429 --format={{.State.Status}}
I0821 10:42:56.502207   50399 ssh_runner.go:195] Run: systemctl --version
I0821 10:42:56.502262   50399 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-923429
I0821 10:42:56.517435   50399 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/functional-923429/id_rsa Username:docker}
I0821 10:42:56.607651   50399 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.20s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-923429 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/mysql                 | 5.7                | 92034fe9a41f4 | 601MB  |
| gcr.io/google-containers/addon-resizer  | functional-923429  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/kube-scheduler          | v1.27.4            | 98ef2570f3cde | 59.8MB |
| docker.io/kindest/kindnetd              | v20230511-dc714da8 | b0b1fa0f58c6e | 65.2MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/kube-proxy              | v1.27.4            | 6848d7eda0341 | 72.7MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.27.4            | e7972205b6614 | 122MB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/library/nginx                 | alpine             | eaf194063ee28 | 44.4MB |
| docker.io/library/nginx                 | latest             | eea7b3dcba7ee | 191MB  |
| registry.k8s.io/etcd                    | 3.5.7-0            | 86b6af7dd652c | 297MB  |
| registry.k8s.io/kube-controller-manager | v1.27.4            | f466468864b7a | 114MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/my-image                      | functional-923429  | dc288fee40a69 | 1.47MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-923429 image ls --format table --alsologtostderr:
I0821 10:42:58.704140   51094 out.go:296] Setting OutFile to fd 1 ...
I0821 10:42:58.704238   51094 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0821 10:42:58.704245   51094 out.go:309] Setting ErrFile to fd 2...
I0821 10:42:58.704249   51094 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0821 10:42:58.704439   51094 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17102-5717/.minikube/bin
I0821 10:42:58.704966   51094 config.go:182] Loaded profile config "functional-923429": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0821 10:42:58.705059   51094 config.go:182] Loaded profile config "functional-923429": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0821 10:42:58.705416   51094 cli_runner.go:164] Run: docker container inspect functional-923429 --format={{.State.Status}}
I0821 10:42:58.723069   51094 ssh_runner.go:195] Run: systemctl --version
I0821 10:42:58.723128   51094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-923429
I0821 10:42:58.739549   51094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/functional-923429/id_rsa Username:docker}
I0821 10:42:58.827607   51094 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.20s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-923429 image ls --format json --alsologtostderr:
[{"id":"a1c871b2039a57a3a48f27338834e347e08a8b7a5f1bd8eb20ec0d7e7370d58c","repoDigests":["docker.io/library/a788fb1478066328110ee6762fc939bd40a6205f4c661e09c7676f77095eb991-tmp@sha256:23c32a4a1a660224dbba88856455eded38165e3b6e0c450e38eca35b9a74c177"],"repoTags":[],"size":"1465611"},{"id":"eea7b3dcba7ee47c0d16a60cc85d2b977d166be3960541991f3e6294d795ed24","repoDigests":["docker.io/library/nginx@sha256:104c7c5c54f2685f0f46f3be607ce60da7085da3eaa5ad22d3d9f01594295e9c","docker.io/library/nginx@sha256:48a84a0728cab8ac558f48796f901f6d31d287101bc8b317683678125e0d2d35"],"repoTags":["docker.io/library/nginx:latest"],"size":"190820092"},{"id":"dc288fee40a69ed8bc9c1bc112e4de1be83ca2c90a86c284d6dcea6e369661bb","repoDigests":["localhost/my-image@sha256:932bddae222440e0858c415510b5ff7f6a17ef37a522203439ae0ad8e97c969c"],"repoTags":["localhost/my-image:functional-923429"],"size":"1468194"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b28338
8d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da","repoDigests":["docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974","docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"],"repoTags":["docker.io/kindest/kindnetd:v20230511-dc714da8"],"size":"65249302"},{"id":"92034fe9a41f4344b97f3fc88a8796248e2cfa9b934be58379f3dbc150d07d9d","repoDigests":["docker.io/library/mysql@sha256:2c23f254c6b9444ecda9ba36051a9800e8934a2f5828ecc8730531db8142af83","docker.io/library/mysql@sha256:aaa1374f1e6c24d73e9dfa8f2cdae81c8054e6d1d80c32da883a9050258b6e83"],"repoTags":["docker.io/library/mysql:5.7"],"size":"601277093"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/
google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-923429"],"size":"34114467"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4","repoDigests":["registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf","registry.k8s.io/kube-proxy@sha256:ce9abe867450f8962eb851670b5869219ca0c3376777d1e18d89f9abedbe10c3"],"repoTags":["registry.k8s.io/kube-proxy:v1.27.4"],"size":"72714135"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.i
o/pause:3.3"],"size":"686139"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c","repoDigests":["registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d","registry.k8s.io/kube-apiserver@sha256:dcf39b4579f896291ec79bb2ef94ad2b51e2ad1846086df705b06dc3ae20c854"],"repoTags":["registry.k8s.io/kube-apiserver:v1.27.4"],"size":"122078160"},{"id":"98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16","repoDigests":["registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af","registry.k8s.io/kube-scheduler@sha256:9c58009453cfcd7533721327269
d2ef0af93d09f21812a5d584c375840117da7"],"repoTags":["registry.k8s.io/kube-scheduler:v1.27.4"],"size":"59814710"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681","repoDigests":["registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83","registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9"],"repoTags":["registry.k8s.io/etcd:3.5.7-0"],"size":"297083935"},{"id":"f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5","repoD
igests":["registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265","registry.k8s.io/kube-controller-manager@sha256:c4765f94930681526ac9179fc4e49b5254abcbfa33841af4602a52bc664f6934"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.27.4"],"size":"113931062"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"eaf194063ee
287f60137b88326ed4d3a14ec62f20de06df6ff7f8b5ed9f1d08c","repoDigests":["docker.io/library/nginx@sha256:9d749f4c66e206398d0c2d667b2b14c201cc4fd089419245c14a031b9368de3a","docker.io/library/nginx@sha256:cac882be2b7305e0c8d3e3cd0575a2fd58f5fde6dd5d6299605aa0f3e67ca385"],"repoTags":["docker.io/library/nginx:alpine"],"size":"44389671"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTag
s":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-923429 image ls --format json --alsologtostderr:
I0821 10:42:58.499072   51010 out.go:296] Setting OutFile to fd 1 ...
I0821 10:42:58.499200   51010 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0821 10:42:58.499209   51010 out.go:309] Setting ErrFile to fd 2...
I0821 10:42:58.499213   51010 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0821 10:42:58.499435   51010 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17102-5717/.minikube/bin
I0821 10:42:58.500013   51010 config.go:182] Loaded profile config "functional-923429": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0821 10:42:58.500124   51010 config.go:182] Loaded profile config "functional-923429": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0821 10:42:58.500505   51010 cli_runner.go:164] Run: docker container inspect functional-923429 --format={{.State.Status}}
I0821 10:42:58.516786   51010 ssh_runner.go:195] Run: systemctl --version
I0821 10:42:58.516831   51010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-923429
I0821 10:42:58.533284   51010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/functional-923429/id_rsa Username:docker}
I0821 10:42:58.623118   51010 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-923429 image ls --format yaml --alsologtostderr:
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-923429
size: "34114467"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: f466468864b7a960b22d9bc40e713c0dfc86d4544b1d1460ea6f120f13f286a5
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265
- registry.k8s.io/kube-controller-manager@sha256:c4765f94930681526ac9179fc4e49b5254abcbfa33841af4602a52bc664f6934
repoTags:
- registry.k8s.io/kube-controller-manager:v1.27.4
size: "113931062"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: eaf194063ee287f60137b88326ed4d3a14ec62f20de06df6ff7f8b5ed9f1d08c
repoDigests:
- docker.io/library/nginx@sha256:9d749f4c66e206398d0c2d667b2b14c201cc4fd089419245c14a031b9368de3a
- docker.io/library/nginx@sha256:cac882be2b7305e0c8d3e3cd0575a2fd58f5fde6dd5d6299605aa0f3e67ca385
repoTags:
- docker.io/library/nginx:alpine
size: "44389671"
- id: 86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681
repoDigests:
- registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83
- registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9
repoTags:
- registry.k8s.io/etcd:3.5.7-0
size: "297083935"
- id: 6848d7eda0341fb6b336415706f630eb2f24e9569d581c63ab6f6a1d21654ce4
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf
- registry.k8s.io/kube-proxy@sha256:ce9abe867450f8962eb851670b5869219ca0c3376777d1e18d89f9abedbe10c3
repoTags:
- registry.k8s.io/kube-proxy:v1.27.4
size: "72714135"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da
repoDigests:
- docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974
- docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9
repoTags:
- docker.io/kindest/kindnetd:v20230511-dc714da8
size: "65249302"
- id: eea7b3dcba7ee47c0d16a60cc85d2b977d166be3960541991f3e6294d795ed24
repoDigests:
- docker.io/library/nginx@sha256:104c7c5c54f2685f0f46f3be607ce60da7085da3eaa5ad22d3d9f01594295e9c
- docker.io/library/nginx@sha256:48a84a0728cab8ac558f48796f901f6d31d287101bc8b317683678125e0d2d35
repoTags:
- docker.io/library/nginx:latest
size: "190820092"
- id: 98ef2570f3cde33e2d94e0d55c7f1345a0e9ab8d76faa14a24693f5ee1872f16
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af
- registry.k8s.io/kube-scheduler@sha256:9c58009453cfcd7533721327269d2ef0af93d09f21812a5d584c375840117da7
repoTags:
- registry.k8s.io/kube-scheduler:v1.27.4
size: "59814710"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 92034fe9a41f4344b97f3fc88a8796248e2cfa9b934be58379f3dbc150d07d9d
repoDigests:
- docker.io/library/mysql@sha256:2c23f254c6b9444ecda9ba36051a9800e8934a2f5828ecc8730531db8142af83
- docker.io/library/mysql@sha256:aaa1374f1e6c24d73e9dfa8f2cdae81c8054e6d1d80c32da883a9050258b6e83
repoTags:
- docker.io/library/mysql:5.7
size: "601277093"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: e7972205b6614ada77fb47d36d47b3cbed594932415d0d0deac8eec83111884c
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d
- registry.k8s.io/kube-apiserver@sha256:dcf39b4579f896291ec79bb2ef94ad2b51e2ad1846086df705b06dc3ae20c854
repoTags:
- registry.k8s.io/kube-apiserver:v1.27.4
size: "122078160"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-923429 image ls --format yaml --alsologtostderr:
I0821 10:42:56.685133   50442 out.go:296] Setting OutFile to fd 1 ...
I0821 10:42:56.685247   50442 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0821 10:42:56.685255   50442 out.go:309] Setting ErrFile to fd 2...
I0821 10:42:56.685260   50442 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0821 10:42:56.685455   50442 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17102-5717/.minikube/bin
I0821 10:42:56.685978   50442 config.go:182] Loaded profile config "functional-923429": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0821 10:42:56.686067   50442 config.go:182] Loaded profile config "functional-923429": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0821 10:42:56.686426   50442 cli_runner.go:164] Run: docker container inspect functional-923429 --format={{.State.Status}}
I0821 10:42:56.702396   50442 ssh_runner.go:195] Run: systemctl --version
I0821 10:42:56.702445   50442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-923429
I0821 10:42:56.718296   50442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/functional-923429/id_rsa Username:docker}
I0821 10:42:56.807531   50442 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.20s)

TestFunctional/parallel/ImageCommands/ImageBuild (1.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-923429 ssh pgrep buildkitd: exit status 1 (233.082005ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 image build -t localhost/my-image:functional-923429 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-923429 image build -t localhost/my-image:functional-923429 testdata/build --alsologtostderr: (1.163136005s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-923429 image build -t localhost/my-image:functional-923429 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> a1c871b2039
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-923429
--> dc288fee40a
Successfully tagged localhost/my-image:functional-923429
dc288fee40a69ed8bc9c1bc112e4de1be83ca2c90a86c284d6dcea6e369661bb
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-923429 image build -t localhost/my-image:functional-923429 testdata/build --alsologtostderr:
I0821 10:42:57.116365   50571 out.go:296] Setting OutFile to fd 1 ...
I0821 10:42:57.116521   50571 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0821 10:42:57.116530   50571 out.go:309] Setting ErrFile to fd 2...
I0821 10:42:57.116535   50571 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0821 10:42:57.116743   50571 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17102-5717/.minikube/bin
I0821 10:42:57.117304   50571 config.go:182] Loaded profile config "functional-923429": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0821 10:42:57.117867   50571 config.go:182] Loaded profile config "functional-923429": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0821 10:42:57.118245   50571 cli_runner.go:164] Run: docker container inspect functional-923429 --format={{.State.Status}}
I0821 10:42:57.134445   50571 ssh_runner.go:195] Run: systemctl --version
I0821 10:42:57.134495   50571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-923429
I0821 10:42:57.150279   50571 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/functional-923429/id_rsa Username:docker}
I0821 10:42:57.235461   50571 build_images.go:151] Building image from path: /tmp/build.1271314120.tar
I0821 10:42:57.235527   50571 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0821 10:42:57.243102   50571 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1271314120.tar
I0821 10:42:57.245899   50571 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1271314120.tar: stat -c "%s %y" /var/lib/minikube/build/build.1271314120.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1271314120.tar': No such file or directory
I0821 10:42:57.245922   50571 ssh_runner.go:362] scp /tmp/build.1271314120.tar --> /var/lib/minikube/build/build.1271314120.tar (3072 bytes)
I0821 10:42:57.265825   50571 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1271314120
I0821 10:42:57.273522   50571 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1271314120 -xf /var/lib/minikube/build/build.1271314120.tar
I0821 10:42:57.281256   50571 crio.go:297] Building image: /var/lib/minikube/build/build.1271314120
I0821 10:42:57.281320   50571 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-923429 /var/lib/minikube/build/build.1271314120 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0821 10:42:58.221562   50571 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1271314120
I0821 10:42:58.229508   50571 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1271314120.tar
I0821 10:42:58.236894   50571 build_images.go:207] Built localhost/my-image:functional-923429 from /tmp/build.1271314120.tar
I0821 10:42:58.236924   50571 build_images.go:123] succeeded building to: functional-923429
I0821 10:42:58.236929   50571 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.60s)
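
The STEP lines in the build output imply a three-line Dockerfile in testdata/build. Reconstructing it makes the build reproducible by hand (the content of content.txt is an assumption; only the Dockerfile steps are taken from the log):

mkdir -p /tmp/build && cd /tmp/build
printf 'test content\n' > content.txt
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
out/minikube-linux-amd64 -p functional-923429 image build \
  -t localhost/my-image:functional-923429 .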

TestFunctional/parallel/ImageCommands/Setup (0.95s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-923429
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.95s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

TestFunctional/parallel/MountCmd/any-port (17.08s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-923429 /tmp/TestFunctionalparallelMountCmdany-port1742486508/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1692614541906011919" to /tmp/TestFunctionalparallelMountCmdany-port1742486508/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1692614541906011919" to /tmp/TestFunctionalparallelMountCmdany-port1742486508/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1692614541906011919" to /tmp/TestFunctionalparallelMountCmdany-port1742486508/001/test-1692614541906011919
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-923429 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (281.985864ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 21 10:42 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 21 10:42 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 21 10:42 test-1692614541906011919
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 ssh cat /mount-9p/test-1692614541906011919
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-923429 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [d1d05195-217b-4c28-8321-7d5521bf9dc9] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [d1d05195-217b-4c28-8321-7d5521bf9dc9] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [d1d05195-217b-4c28-8321-7d5521bf9dc9] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 13.097506507s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-923429 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-923429 /tmp/TestFunctionalparallelMountCmdany-port1742486508/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (17.08s)
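Note: the 9p mount check above can be reproduced by hand. A minimal sketch, assuming a running profile named functional-923429 and an existing host directory /tmp/demo (the directory name is illustrative):

    # Expose a host directory inside the node over 9p (the mount command stays in the foreground)
    out/minikube-linux-amd64 mount -p functional-923429 /tmp/demo:/mount-9p &
    # From a second shell, confirm a 9p filesystem is mounted at the target
    out/minikube-linux-amd64 -p functional-923429 ssh "findmnt -T /mount-9p | grep 9p"
    # List the exported files from inside the guest
    out/minikube-linux-amd64 -p functional-923429 ssh -- ls -la /mount-9p

As the log shows, the first findmnt probe can race the mount setup and exit non-zero; the test simply retries.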
TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "301.455039ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "50.866099ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "263.307088ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "66.751607ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.99s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 image load --daemon gcr.io/google-containers/addon-resizer:functional-923429 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-923429 image load --daemon gcr.io/google-containers/addon-resizer:functional-923429 --alsologtostderr: (4.778076442s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.99s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.8s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-923429
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 image load --daemon gcr.io/google-containers/addon-resizer:functional-923429 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-923429 image load --daemon gcr.io/google-containers/addon-resizer:functional-923429 --alsologtostderr: (5.838181212s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.80s)
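Note: the tag-and-load flow above amounts to the following shell sequence (a sketch; the image tag and profile name are taken from the log):

    # Pull a known image and retag it with the profile name
    docker pull gcr.io/google-containers/addon-resizer:1.8.9
    docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-923429
    # Copy the image from the host docker daemon into the cluster's container runtime (CRI-O here)
    out/minikube-linux-amd64 -p functional-923429 image load --daemon gcr.io/google-containers/addon-resizer:functional-923429
    # Confirm the cluster runtime can see it
    out/minikube-linux-amd64 -p functional-923429 image ls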
TestFunctional/parallel/MountCmd/specific-port (1.98s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-923429 /tmp/TestFunctionalparallelMountCmdspecific-port865303904/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-923429 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (266.075648ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-923429 /tmp/TestFunctionalparallelMountCmdspecific-port865303904/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-923429 ssh "sudo umount -f /mount-9p": exit status 1 (292.388829ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-923429 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-923429 /tmp/TestFunctionalparallelMountCmdspecific-port865303904/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.98s)
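Note: the same mount flow with a pinned server port, as exercised above (a sketch; /tmp/demo is illustrative):

    # Pin the 9p server to port 46464 instead of a random free port
    out/minikube-linux-amd64 mount -p functional-923429 /tmp/demo:/mount-9p --port 46464 &
    out/minikube-linux-amd64 -p functional-923429 ssh "findmnt -T /mount-9p | grep 9p"
    # Force-unmount from inside the guest; exits 32 if nothing is mounted, as seen above
    out/minikube-linux-amd64 -p functional-923429 ssh "sudo umount -f /mount-9p"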
TestFunctional/parallel/MountCmd/VerifyCleanup (2.04s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-923429 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1518183147/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-923429 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1518183147/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-923429 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1518183147/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-923429 ssh "findmnt -T" /mount1: exit status 1 (441.866404ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-923429 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-923429 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1518183147/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-923429 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1518183147/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-923429 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1518183147/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.04s)
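Note: cleanup of stray mount daemons, as verified above, can be driven with --kill (a sketch; the host directory is illustrative):

    # Start several mounts of the same host directory
    out/minikube-linux-amd64 mount -p functional-923429 /tmp/demo:/mount1 &
    out/minikube-linux-amd64 mount -p functional-923429 /tmp/demo:/mount2 &
    # Kill every mount process belonging to the profile in one shot
    out/minikube-linux-amd64 mount -p functional-923429 --kill=true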
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 image save gcr.io/google-containers/addon-resizer:functional-923429 /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.76s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.41s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-923429 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-923429 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-923429 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 48849: os: process already finished
helpers_test.go:502: unable to terminate pid 48662: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-923429 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.41s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-923429 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.33s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-923429 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [e4079796-2e6c-4ecb-82f0-8ec52013ab8a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [e4079796-2e6c-4ecb-82f0-8ec52013ab8a] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.01145779s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.33s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 image rm gcr.io/google-containers/addon-resizer:functional-923429 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.06s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.06s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-923429
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 image save --daemon gcr.io/google-containers/addon-resizer:functional-923429 --alsologtostderr
2023/08/21 10:42:46 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-923429 image save --daemon gcr.io/google-containers/addon-resizer:functional-923429 --alsologtostderr: (1.456494983s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-923429
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.50s)
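Note: taken together, the ImageCommands subtests round-trip an image through a tar archive and the host daemon. A sketch using the names from the log (the archive path is shortened here):

    # Save a cluster image to a tar archive on the host
    out/minikube-linux-amd64 -p functional-923429 image save gcr.io/google-containers/addon-resizer:functional-923429 ./addon-resizer-save.tar
    # Remove it from the cluster runtime, then restore it from the archive
    out/minikube-linux-amd64 -p functional-923429 image rm gcr.io/google-containers/addon-resizer:functional-923429
    out/minikube-linux-amd64 -p functional-923429 image load ./addon-resizer-save.tar
    # Or push it back into the host docker daemon and inspect it there
    out/minikube-linux-amd64 -p functional-923429 image save --daemon gcr.io/google-containers/addon-resizer:functional-923429
    docker image inspect gcr.io/google-containers/addon-resizer:functional-923429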
TestFunctional/parallel/ServiceCmd/DeployApp (9.2s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-923429 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-923429 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-775766b4cc-djq2k" [68c827ef-a8b1-4989-a80b-c06778c72b03] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-775766b4cc-djq2k" [68c827ef-a8b1-4989-a80b-c06778c72b03] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.013291809s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.20s)
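Note: the fixture behind the ServiceCmd subtests is plain kubectl (a sketch; the readiness check at the end is illustrative, since the test framework does its own wait):

    kubectl --context functional-923429 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-923429 expose deployment hello-node --type=NodePort --port=8080
    # Check that the pod has come up
    kubectl --context functional-923429 get pods -l app=hello-node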
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-923429 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.219.224 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
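Note: the tunnel workflow exercised by these serial subtests, in shell form (a sketch; the service name and ClusterIP come from the log, and the curl probe is illustrative):

    # Start the tunnel in the background; it routes LoadBalancer traffic from the host into the cluster
    out/minikube-linux-amd64 -p functional-923429 tunnel --alsologtostderr &
    # Once nginx-svc is assigned an ingress IP, it is reachable directly from the host
    kubectl --context functional-923429 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    curl http://10.97.219.224/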
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-923429 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/List (1.68s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 service list
functional_test.go:1458: (dbg) Done: out/minikube-linux-amd64 -p functional-923429 service list: (1.674954857s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.68s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.67s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 service list -o json
functional_test.go:1488: (dbg) Done: out/minikube-linux-amd64 -p functional-923429 service list -o json: (1.669178644s)
functional_test.go:1493: Took "1.669282698s" to run "out/minikube-linux-amd64 -p functional-923429 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.67s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:32210
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.49s)

TestFunctional/parallel/ServiceCmd/Format (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.49s)

TestFunctional/parallel/ServiceCmd/URL (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-923429 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:32210
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.54s)
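Note: the URL-discovery variants tested above, side by side (a sketch; the endpoint values differ per run):

    out/minikube-linux-amd64 -p functional-923429 service list                       # human-readable table
    out/minikube-linux-amd64 -p functional-923429 service list -o json               # machine-readable
    out/minikube-linux-amd64 -p functional-923429 service hello-node --url           # e.g. http://192.168.49.2:32210
    out/minikube-linux-amd64 -p functional-923429 service --namespace=default --https --url hello-node
    out/minikube-linux-amd64 -p functional-923429 service hello-node --url --format={{.IP}}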
TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-923429
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-923429
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-923429
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (68.15s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-218089 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0821 10:43:37.060074   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-218089 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m8.14621348s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (68.15s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.24s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-218089 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-218089 addons enable ingress --alsologtostderr -v=5: (10.241679368s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.24s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.52s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-218089 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.52s)

TestJSONOutput/start/Command (67.76s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-404378 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0821 10:47:41.274180   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/functional-923429/client.crt: no such file or directory
E0821 10:48:01.754881   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/functional-923429/client.crt: no such file or directory
E0821 10:48:42.715519   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/functional-923429/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-404378 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m7.763124704s)
--- PASS: TestJSONOutput/start/Command (67.76s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.64s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-404378 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.64s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.56s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-404378 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.56s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.72s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-404378 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-404378 --output=json --user=testUser: (5.723783102s)
--- PASS: TestJSONOutput/stop/Command (5.72s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.18s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-386844 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-386844 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (57.675757ms)

-- stdout --
	{"specversion":"1.0","id":"5bce4f36-7c6c-44fe-99b4-0a637abaaaca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-386844] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6aec0ddd-8676-4efa-926f-b35a32b4ddf1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17102"}}
	{"specversion":"1.0","id":"6c153491-cf40-4c0a-a45f-3fdd3e32e164","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6dba09d6-d0c5-4786-b7d8-4f027600d7a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17102-5717/kubeconfig"}}
	{"specversion":"1.0","id":"c2ced757-643b-4e71-9b6e-33cf87c25e38","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17102-5717/.minikube"}}
	{"specversion":"1.0","id":"d62cf5a7-a256-498e-ae3e-bcddd1f8cb3f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"c9d07a9a-337f-4d87-abc5-2581d952e78f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8d72aa4d-ece9-4634-8ca5-b269344c6782","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-386844" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-386844
--- PASS: TestErrorJSONOutput (0.18s)
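Note: each line of the --output=json stream above is a CloudEvents envelope, so it can be consumed with standard JSON tooling. A sketch, assuming jq is available on the host:

    # Print only error events, e.g. DRV_UNSUPPORTED_OS and its message
    out/minikube-linux-amd64 start -p json-output-error-386844 --memory=2200 --output=json --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'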
TestKicCustomNetwork/create_custom_network (31.14s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-443363 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-443363 --network=: (29.082203258s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-443363" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-443363
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-443363: (2.036383344s)
--- PASS: TestKicCustomNetwork/create_custom_network (31.14s)

TestKicCustomNetwork/use_default_bridge_network (24.24s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-691374 --network=bridge
E0821 10:49:33.389558   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/client.crt: no such file or directory
E0821 10:49:33.394810   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/client.crt: no such file or directory
E0821 10:49:33.405058   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/client.crt: no such file or directory
E0821 10:49:33.425298   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/client.crt: no such file or directory
E0821 10:49:33.465565   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/client.crt: no such file or directory
E0821 10:49:33.545860   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/client.crt: no such file or directory
E0821 10:49:33.706258   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/client.crt: no such file or directory
E0821 10:49:34.026773   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/client.crt: no such file or directory
E0821 10:49:34.667653   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/client.crt: no such file or directory
E0821 10:49:35.948209   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/client.crt: no such file or directory
E0821 10:49:38.508472   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/client.crt: no such file or directory
E0821 10:49:43.629440   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-691374 --network=bridge: (22.395038849s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-691374" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-691374
E0821 10:49:53.870473   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-691374: (1.830105893s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.24s)

TestKicExistingNetwork (27.08s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-623979 --network=existing-network
E0821 10:50:04.636810   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/functional-923429/client.crt: no such file or directory
E0821 10:50:14.350933   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-623979 --network=existing-network: (25.09791753s)
helpers_test.go:175: Cleaning up "existing-network-623979" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-623979
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-623979: (1.848744898s)
--- PASS: TestKicExistingNetwork (27.08s)

TestKicCustomSubnet (24.26s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-480888 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-480888 --subnet=192.168.60.0/24: (22.262828667s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-480888 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-480888" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-480888
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-480888: (1.980837594s)
--- PASS: TestKicCustomSubnet (24.26s)

TestKicStaticIP (23.49s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-198410 --static-ip=192.168.200.200
E0821 10:50:53.215744   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/client.crt: no such file or directory
E0821 10:50:55.311483   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-198410 --static-ip=192.168.200.200: (21.319176656s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-198410 ip
helpers_test.go:175: Cleaning up "static-ip-198410" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-198410
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-198410: (2.056288528s)
--- PASS: TestKicStaticIP (23.49s)
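Note: the KIC tests above cover the docker-driver network knobs; in shell form (a sketch; names and ranges are taken from the log):

    out/minikube-linux-amd64 start -p docker-network-443363 --network=                # auto-created custom network
    out/minikube-linux-amd64 start -p docker-network-691374 --network=bridge          # reuse docker's default bridge
    out/minikube-linux-amd64 start -p custom-subnet-480888 --subnet=192.168.60.0/24   # choose the subnet
    out/minikube-linux-amd64 start -p static-ip-198410 --static-ip=192.168.200.200    # pin the node IP
    # Confirm the subnet docker actually assigned
    docker network inspect custom-subnet-480888 --format "{{(index .IPAM.Config 0).Subnet}}"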
TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (52.71s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-023976 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-023976 --driver=docker  --container-runtime=crio: (23.796358023s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-027506 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-027506 --driver=docker  --container-runtime=crio: (24.338519539s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-023976
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-027506
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-027506" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-027506
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-027506: (1.818435242s)
helpers_test.go:175: Cleaning up "first-023976" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-023976
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-023976: (1.809627315s)
--- PASS: TestMinikubeProfile (52.71s)
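Note: switching the active profile, as exercised above (a sketch using the profile names from the log):

    out/minikube-linux-amd64 start -p first-023976 --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 start -p second-027506 --driver=docker --container-runtime=crio
    # Select a profile, then check which one is marked active
    out/minikube-linux-amd64 profile first-023976
    out/minikube-linux-amd64 profile list -ojson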
TestMountStart/serial/StartWithMountFirst (7.97s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-801235 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-801235 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.969262174s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.97s)

TestMountStart/serial/VerifyMountFirst (0.24s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-801235 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.24s)

TestMountStart/serial/StartWithMountSecond (5.12s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-816149 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-816149 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.12438908s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.12s)

TestMountStart/serial/VerifyMountSecond (0.24s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-816149 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.24s)

TestMountStart/serial/DeleteFirst (1.61s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-801235 --alsologtostderr -v=5
E0821 10:52:17.232203   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/client.crt: no such file or directory
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-801235 --alsologtostderr -v=5: (1.606633071s)
--- PASS: TestMountStart/serial/DeleteFirst (1.61s)

TestMountStart/serial/VerifyMountPostDelete (0.23s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-816149 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.23s)

TestMountStart/serial/Stop (1.17s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-816149
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-816149: (1.174835794s)
--- PASS: TestMountStart/serial/Stop (1.17s)

TestMountStart/serial/RestartStopped (6.95s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-816149
E0821 10:52:20.792578   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/functional-923429/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-816149: (5.9540102s)
--- PASS: TestMountStart/serial/RestartStopped (6.95s)

TestMountStart/serial/VerifyMountPostStop (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-816149 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.24s)
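Note: the MountStart sequence checks that a --mount flag given at start time survives a stop/start cycle. A sketch with the flags from the log (per the Verify subtests, the default mount appears at /minikube-host in the guest):

    # Start a kubernetes-less node with a host mount
    out/minikube-linux-amd64 start -p mount-start-2-816149 --memory=2048 --mount --mount-port 46465 --no-kubernetes --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 -p mount-start-2-816149 ssh -- ls /minikube-host
    # Stop and restart; the mount is expected to be re-established
    out/minikube-linux-amd64 stop -p mount-start-2-816149
    out/minikube-linux-amd64 start -p mount-start-2-816149
    out/minikube-linux-amd64 -p mount-start-2-816149 ssh -- ls /minikube-host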
TestMultiNode/serial/FreshStart2Nodes (97.49s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-200985 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0821 10:52:48.477983   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/functional-923429/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-200985 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m37.068465593s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200985 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (97.49s)

TestMultiNode/serial/DeployApp2Nodes (3.52s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-200985 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-200985 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-200985 -- rollout status deployment/busybox: (1.81133859s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-200985 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-200985 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-200985 -- exec busybox-67b7f59bb-4kkp2 -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-200985 -- exec busybox-67b7f59bb-vtjvj -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-200985 -- exec busybox-67b7f59bb-4kkp2 -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-200985 -- exec busybox-67b7f59bb-vtjvj -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-200985 -- exec busybox-67b7f59bb-4kkp2 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-200985 -- exec busybox-67b7f59bb-vtjvj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.52s)
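The test resolves kubernetes.io, kubernetes.default, and the fully qualified service name from a pod on each node, checking external DNS, in-cluster search-path expansion, and the full cluster domain respectively. A hedged sketch of the same flow (the manifest path and deployment name come from the test's testdata; real pod names vary per run):

    kubectl apply -f testdata/multinodes/multinode-pod-dns-test.yaml
    kubectl rollout status deployment/busybox
    # run the lookup inside one of the deployment's pods
    kubectl exec deployment/busybox -- nslookup kubernetes.default.svc.cluster.local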

TestMultiNode/serial/AddNode (50.35s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-200985 -v 3 --alsologtostderr
E0821 10:54:33.388999   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/client.crt: no such file or directory
E0821 10:55:01.073200   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-200985 -v 3 --alsologtostderr: (49.796940044s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200985 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (50.35s)
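node add grows the existing profile by one worker (the -v 3 in the invocation above is log verbosity, not a node count). Sketch, continuing the illustrative "demo" profile:

    # add a third node; it will be named <profile>-m03
    minikube node add -p demo
    minikube -p demo status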

TestMultiNode/serial/ProfileList (0.25s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.25s)

TestMultiNode/serial/CopyFile (8.49s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200985 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200985 cp testdata/cp-test.txt multinode-200985:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200985 ssh -n multinode-200985 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200985 cp multinode-200985:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2097216110/001/cp-test_multinode-200985.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200985 ssh -n multinode-200985 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200985 cp multinode-200985:/home/docker/cp-test.txt multinode-200985-m02:/home/docker/cp-test_multinode-200985_multinode-200985-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200985 ssh -n multinode-200985 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200985 ssh -n multinode-200985-m02 "sudo cat /home/docker/cp-test_multinode-200985_multinode-200985-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200985 cp multinode-200985:/home/docker/cp-test.txt multinode-200985-m03:/home/docker/cp-test_multinode-200985_multinode-200985-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200985 ssh -n multinode-200985 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200985 ssh -n multinode-200985-m03 "sudo cat /home/docker/cp-test_multinode-200985_multinode-200985-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200985 cp testdata/cp-test.txt multinode-200985-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200985 ssh -n multinode-200985-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200985 cp multinode-200985-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2097216110/001/cp-test_multinode-200985-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200985 ssh -n multinode-200985-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200985 cp multinode-200985-m02:/home/docker/cp-test.txt multinode-200985:/home/docker/cp-test_multinode-200985-m02_multinode-200985.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200985 ssh -n multinode-200985-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200985 ssh -n multinode-200985 "sudo cat /home/docker/cp-test_multinode-200985-m02_multinode-200985.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200985 cp multinode-200985-m02:/home/docker/cp-test.txt multinode-200985-m03:/home/docker/cp-test_multinode-200985-m02_multinode-200985-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200985 ssh -n multinode-200985-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200985 ssh -n multinode-200985-m03 "sudo cat /home/docker/cp-test_multinode-200985-m02_multinode-200985-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200985 cp testdata/cp-test.txt multinode-200985-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200985 ssh -n multinode-200985-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200985 cp multinode-200985-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2097216110/001/cp-test_multinode-200985-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200985 ssh -n multinode-200985-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200985 cp multinode-200985-m03:/home/docker/cp-test.txt multinode-200985:/home/docker/cp-test_multinode-200985-m03_multinode-200985.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200985 ssh -n multinode-200985-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200985 ssh -n multinode-200985 "sudo cat /home/docker/cp-test_multinode-200985-m03_multinode-200985.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200985 cp multinode-200985-m03:/home/docker/cp-test.txt multinode-200985-m02:/home/docker/cp-test_multinode-200985-m03_multinode-200985-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200985 ssh -n multinode-200985-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200985 ssh -n multinode-200985-m02 "sudo cat /home/docker/cp-test_multinode-200985-m03_multinode-200985-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.49s)
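The copy matrix above covers every direction minikube cp supports: host to node, node to host, and node to node, each transfer verified by cat-ing the file over ssh. Condensed sketch ("demo" profile and node names illustrative):

    minikube -p demo cp testdata/cp-test.txt demo:/home/docker/cp-test.txt                    # host -> node
    minikube -p demo cp demo:/home/docker/cp-test.txt /tmp/cp-test.txt                        # node -> host
    minikube -p demo cp demo-m02:/home/docker/cp-test.txt demo-m03:/home/docker/cp-test.txt   # node -> node
    minikube -p demo ssh -n demo-m03 "sudo cat /home/docker/cp-test.txt"                      # verify on the target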

TestMultiNode/serial/StopNode (2.05s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200985 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-200985 node stop m03: (1.182200825s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200985 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-200985 status: exit status 7 (436.512795ms)

-- stdout --
	multinode-200985
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-200985-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-200985-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200985 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-200985 status --alsologtostderr: exit status 7 (430.472622ms)

-- stdout --
	multinode-200985
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-200985-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-200985-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0821 10:55:13.486271  110524 out.go:296] Setting OutFile to fd 1 ...
	I0821 10:55:13.486427  110524 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 10:55:13.486437  110524 out.go:309] Setting ErrFile to fd 2...
	I0821 10:55:13.486444  110524 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 10:55:13.486653  110524 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17102-5717/.minikube/bin
	I0821 10:55:13.486844  110524 out.go:303] Setting JSON to false
	I0821 10:55:13.486886  110524 mustload.go:65] Loading cluster: multinode-200985
	I0821 10:55:13.486992  110524 notify.go:220] Checking for updates...
	I0821 10:55:13.487279  110524 config.go:182] Loaded profile config "multinode-200985": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0821 10:55:13.487294  110524 status.go:255] checking status of multinode-200985 ...
	I0821 10:55:13.487721  110524 cli_runner.go:164] Run: docker container inspect multinode-200985 --format={{.State.Status}}
	I0821 10:55:13.505351  110524 status.go:330] multinode-200985 host status = "Running" (err=<nil>)
	I0821 10:55:13.505386  110524 host.go:66] Checking if "multinode-200985" exists ...
	I0821 10:55:13.505606  110524 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-200985
	I0821 10:55:13.521062  110524 host.go:66] Checking if "multinode-200985" exists ...
	I0821 10:55:13.521285  110524 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0821 10:55:13.521326  110524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-200985
	I0821 10:55:13.535741  110524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/multinode-200985/id_rsa Username:docker}
	I0821 10:55:13.624319  110524 ssh_runner.go:195] Run: systemctl --version
	I0821 10:55:13.627967  110524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0821 10:55:13.637696  110524 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0821 10:55:13.688661  110524 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:56 SystemTime:2023-08-21 10:55:13.680531738 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0821 10:55:13.689156  110524 kubeconfig.go:92] found "multinode-200985" server: "https://192.168.58.2:8443"
	I0821 10:55:13.689176  110524 api_server.go:166] Checking apiserver status ...
	I0821 10:55:13.689206  110524 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0821 10:55:13.698673  110524 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1444/cgroup
	I0821 10:55:13.706589  110524 api_server.go:182] apiserver freezer: "6:freezer:/docker/30a11af662edf967ffb99de2ef034ce516ea0aacab8a798c9436236a541bf91a/crio/crio-f27c8ba185223b21c117e1052ef91419930702ebac1c82eaa88b5af83db72c75"
	I0821 10:55:13.706659  110524 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/30a11af662edf967ffb99de2ef034ce516ea0aacab8a798c9436236a541bf91a/crio/crio-f27c8ba185223b21c117e1052ef91419930702ebac1c82eaa88b5af83db72c75/freezer.state
	I0821 10:55:13.713721  110524 api_server.go:204] freezer state: "THAWED"
	I0821 10:55:13.713748  110524 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0821 10:55:13.719022  110524 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0821 10:55:13.719044  110524 status.go:421] multinode-200985 apiserver status = Running (err=<nil>)
	I0821 10:55:13.719054  110524 status.go:257] multinode-200985 status: &{Name:multinode-200985 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0821 10:55:13.719073  110524 status.go:255] checking status of multinode-200985-m02 ...
	I0821 10:55:13.719287  110524 cli_runner.go:164] Run: docker container inspect multinode-200985-m02 --format={{.State.Status}}
	I0821 10:55:13.735163  110524 status.go:330] multinode-200985-m02 host status = "Running" (err=<nil>)
	I0821 10:55:13.735180  110524 host.go:66] Checking if "multinode-200985-m02" exists ...
	I0821 10:55:13.735490  110524 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-200985-m02
	I0821 10:55:13.750096  110524 host.go:66] Checking if "multinode-200985-m02" exists ...
	I0821 10:55:13.750379  110524 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0821 10:55:13.750426  110524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-200985-m02
	I0821 10:55:13.765351  110524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17102-5717/.minikube/machines/multinode-200985-m02/id_rsa Username:docker}
	I0821 10:55:13.851973  110524 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0821 10:55:13.861471  110524 status.go:257] multinode-200985-m02 status: &{Name:multinode-200985-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0821 10:55:13.861498  110524 status.go:255] checking status of multinode-200985-m03 ...
	I0821 10:55:13.861730  110524 cli_runner.go:164] Run: docker container inspect multinode-200985-m03 --format={{.State.Status}}
	I0821 10:55:13.877175  110524 status.go:330] multinode-200985-m03 host status = "Stopped" (err=<nil>)
	I0821 10:55:13.877194  110524 status.go:343] host is not running, skipping remaining checks
	I0821 10:55:13.877202  110524 status.go:257] multinode-200985-m03 status: &{Name:multinode-200985-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.05s)
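Stopping one node leaves the rest of the cluster up; status then exits non-zero (7 in this run) to encode that a node is down, which is why the test treats the failure as expected. Sketch:

    minikube -p demo node stop m03
    # a non-zero exit here reports the stopped node, not a command error
    minikube -p demo status || echo "status exited $?, expected while m03 is stopped"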

TestMultiNode/serial/StartAfterStop (10.48s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200985 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-200985 node start m03 --alsologtostderr: (9.835915218s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200985 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.48s)
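node start brings the stopped worker back without touching the other nodes, after which status exits 0 again. Sketch:

    minikube -p demo node start m03
    minikube -p demo status
    kubectl get nodes   # all nodes should return to Ready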

TestMultiNode/serial/RestartKeepsNodes (111.82s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-200985
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-200985
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-200985: (24.705286793s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-200985 --wait=true -v=8 --alsologtostderr
E0821 10:55:53.215334   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-200985 --wait=true -v=8 --alsologtostderr: (1m27.040950571s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-200985
--- PASS: TestMultiNode/serial/RestartKeepsNodes (111.82s)
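The property under test: a full stop followed by a start with --wait=true restores the same node list, so restarts do not silently drop workers. Sketch:

    minikube node list -p demo    # record the node set
    minikube stop -p demo
    minikube start -p demo --wait=true
    minikube node list -p demo    # expect the same set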

TestMultiNode/serial/DeleteNode (4.56s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200985 node delete m03
E0821 10:57:16.260969   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/client.crt: no such file or directory
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-200985 node delete m03: (4.006394122s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200985 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.56s)
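node delete removes the worker from the cluster and tears down its container; the follow-up checks confirm the Kubernetes node object and the Docker volume are gone. Sketch:

    minikube -p demo node delete m03
    kubectl get nodes    # m03 should no longer be listed
    docker volume ls     # its machine volume should be gone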

TestMultiNode/serial/StopMultiNode (23.75s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200985 stop
E0821 10:57:20.792936   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/functional-923429/client.crt: no such file or directory
multinode_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p multinode-200985 stop: (23.605045773s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200985 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-200985 status: exit status 7 (76.105328ms)

-- stdout --
	multinode-200985
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-200985-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200985 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-200985 status --alsologtostderr: exit status 7 (71.693167ms)

-- stdout --
	multinode-200985
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-200985-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0821 10:57:44.455812  120693 out.go:296] Setting OutFile to fd 1 ...
	I0821 10:57:44.455942  120693 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 10:57:44.455950  120693 out.go:309] Setting ErrFile to fd 2...
	I0821 10:57:44.455955  120693 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 10:57:44.456148  120693 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17102-5717/.minikube/bin
	I0821 10:57:44.456299  120693 out.go:303] Setting JSON to false
	I0821 10:57:44.456335  120693 mustload.go:65] Loading cluster: multinode-200985
	I0821 10:57:44.456432  120693 notify.go:220] Checking for updates...
	I0821 10:57:44.456689  120693 config.go:182] Loaded profile config "multinode-200985": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0821 10:57:44.456700  120693 status.go:255] checking status of multinode-200985 ...
	I0821 10:57:44.457031  120693 cli_runner.go:164] Run: docker container inspect multinode-200985 --format={{.State.Status}}
	I0821 10:57:44.473465  120693 status.go:330] multinode-200985 host status = "Stopped" (err=<nil>)
	I0821 10:57:44.473489  120693 status.go:343] host is not running, skipping remaining checks
	I0821 10:57:44.473497  120693 status.go:257] multinode-200985 status: &{Name:multinode-200985 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0821 10:57:44.473551  120693 status.go:255] checking status of multinode-200985-m02 ...
	I0821 10:57:44.473867  120693 cli_runner.go:164] Run: docker container inspect multinode-200985-m02 --format={{.State.Status}}
	I0821 10:57:44.489322  120693 status.go:330] multinode-200985-m02 host status = "Stopped" (err=<nil>)
	I0821 10:57:44.489339  120693 status.go:343] host is not running, skipping remaining checks
	I0821 10:57:44.489345  120693 status.go:257] multinode-200985-m02 status: &{Name:multinode-200985-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.75s)

TestMultiNode/serial/RestartMultiNode (76.25s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-200985 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-200985 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m15.656233537s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-200985 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (76.25s)

TestMultiNode/serial/ValidateNameConflict (22.97s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-200985
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-200985-m02 --driver=docker  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-200985-m02 --driver=docker  --container-runtime=crio: exit status 14 (59.846234ms)

-- stdout --
	* [multinode-200985-m02] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17102
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17102-5717/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17102-5717/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-200985-m02' is duplicated with machine name 'multinode-200985-m02' in profile 'multinode-200985'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-200985-m03 --driver=docker  --container-runtime=crio
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-200985-m03 --driver=docker  --container-runtime=crio: (20.823277356s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-200985
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-200985: exit status 80 (247.702615ms)

-- stdout --
	* Adding node m03 to cluster multinode-200985
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-200985-m03 already exists in multinode-200985-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-200985-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-200985-m03: (1.800562301s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (22.97s)
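Both failures here are deliberate guards: a new profile may not reuse a machine name already owned by a multi-node profile (exit 14, MK_USAGE), and node add refuses when the next node name collides with an existing standalone profile (exit 80). Sketch of the safe pattern (names illustrative):

    # fails: demo-m02 is already the second machine of profile "demo"
    minikube start -p demo-m02 --driver=docker --container-runtime=crio
    # fine: pick a name outside the <profile>-mNN namespace
    minikube start -p demo-extra --driver=docker --container-runtime=crio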

TestPreload (143.2s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-058610 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0821 10:59:33.389251   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-058610 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m9.195092008s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-058610 image pull gcr.io/k8s-minikube/busybox
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-058610
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-058610: (5.601179491s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-058610 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0821 11:00:53.215662   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-058610 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m5.257780707s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-058610 image list
helpers_test.go:175: Cleaning up "test-preload-058610" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-058610
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-058610: (2.229051726s)
--- PASS: TestPreload (143.20s)
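TestPreload checks that an image pulled into a cluster started with --preload=false is still present after a stop and a default (preloaded) restart. Sketch, mirroring the logged commands ("preload-demo" is an illustrative profile name):

    minikube start -p preload-demo --preload=false --kubernetes-version=v1.24.4 --driver=docker --container-runtime=crio
    minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
    minikube stop -p preload-demo
    minikube start -p preload-demo --wait=true --driver=docker --container-runtime=crio
    minikube -p preload-demo image list   # busybox should still be listed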

TestScheduledStopUnix (99.8s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-439768 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-439768 --memory=2048 --driver=docker  --container-runtime=crio: (23.560716025s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-439768 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-439768 -n scheduled-stop-439768
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-439768 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-439768 --cancel-scheduled
E0821 11:02:20.792247   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/functional-923429/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-439768 -n scheduled-stop-439768
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-439768
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-439768 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-439768
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-439768: exit status 7 (58.361117ms)

-- stdout --
	scheduled-stop-439768
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-439768 -n scheduled-stop-439768
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-439768 -n scheduled-stop-439768: exit status 7 (55.96627ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-439768" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-439768
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-439768: (5.032247495s)
--- PASS: TestScheduledStopUnix (99.80s)
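The scheduled-stop flow exercised above: arm a delayed stop, cancel it, re-arm a short one, then confirm the host actually stopped. Sketch (profile name illustrative):

    minikube stop -p sched-demo --schedule 5m        # arm a stop five minutes out
    minikube stop -p sched-demo --cancel-scheduled   # disarm it
    minikube stop -p sched-demo --schedule 15s       # arm a short timer
    sleep 20
    minikube status -p sched-demo                    # exit status 7 once the host is Stopped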

TestInsufficientStorage (9.81s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-332548 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-332548 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.522229396s)

-- stdout --
	{"specversion":"1.0","id":"1682c04a-1b01-488d-8e34-8fb50fedb2d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-332548] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5fb05707-b97e-4cab-84c3-ed10fd95a945","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17102"}}
	{"specversion":"1.0","id":"9d0cc7d1-e4dc-436d-be4a-54ddec5daedb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8b234880-1a7a-4fe2-942b-3cffb4484cb8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17102-5717/kubeconfig"}}
	{"specversion":"1.0","id":"6748c91f-a4bc-4624-b3d5-6c56ef40f0ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17102-5717/.minikube"}}
	{"specversion":"1.0","id":"49c57826-1d0d-47b8-a110-f06b99a1df63","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"430d09a7-fb36-46fb-a4bf-c9096860dd7a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"41e71b83-1fef-4b6e-a153-354a7db9a527","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"17904937-7469-45d7-9b6c-ae8a4a1121ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"dda3a31c-4f54-4e22-821f-6f44862e2975","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"f06a8ab5-0e0c-4225-80fe-21498acc71c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"af78dce0-bc91-4f32-98ff-ceb5c878acf6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-332548 in cluster insufficient-storage-332548","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f8e921cd-ce37-4624-8835-365f5399e5ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"64f73c04-b3f2-47b3-8095-2eaf1b15d663","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"fa3f5a92-4dd1-4238-b21b-37ecc63dc693","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-332548 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-332548 --output=json --layout=cluster: exit status 7 (246.754398ms)

-- stdout --
	{"Name":"insufficient-storage-332548","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-332548","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0821 11:03:39.889089  142347 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-332548" does not appear in /home/jenkins/minikube-integration/17102-5717/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-332548 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-332548 --output=json --layout=cluster: exit status 7 (246.450178ms)

-- stdout --
	{"Name":"insufficient-storage-332548","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-332548","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0821 11:03:40.136015  142435 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-332548" does not appear in /home/jenkins/minikube-integration/17102-5717/kubeconfig
	E0821 11:03:40.145557  142435 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/insufficient-storage-332548/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-332548" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-332548
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-332548: (1.790787369s)
--- PASS: TestInsufficientStorage (9.81s)
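The exit-26 event carries its own remediation advice; on a development box the quickest of the suggested fixes is usually the prune. Sketch of the advice quoted in the JSON above:

    docker system prune                    # reclaim unused Docker data (add -a to also drop unused images)
    minikube ssh -- docker system prune    # same, inside the node, when using the Docker container runtime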

TestKubernetesUpgrade (356.27s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-433377 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-433377 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (51.829986774s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-433377
version_upgrade_test.go:239: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-433377: (2.971785483s)
version_upgrade_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-433377 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-433377 status --format={{.Host}}: exit status 7 (61.662537ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-433377 --memory=2200 --kubernetes-version=v1.28.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:255: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-433377 --memory=2200 --kubernetes-version=v1.28.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m31.432836042s)
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-433377 version --output=json
version_upgrade_test.go:279: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:281: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-433377 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:281: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-433377 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (66.803591ms)

-- stdout --
	* [kubernetes-upgrade-433377] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17102
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17102-5717/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17102-5717/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.0-rc.1 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-433377
	    minikube start -p kubernetes-upgrade-433377 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4333772 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.0-rc.1, by running:
	    
	    minikube start -p kubernetes-upgrade-433377 --kubernetes-version=v1.28.0-rc.1
	    

** /stderr **
version_upgrade_test.go:285: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:287: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-433377 --memory=2200 --kubernetes-version=v1.28.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:287: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-433377 --memory=2200 --kubernetes-version=v1.28.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (27.640006917s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-433377" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-433377
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-433377: (2.207521571s)
--- PASS: TestKubernetesUpgrade (356.27s)
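The supported path is start-old, stop, start-new; an in-place downgrade is refused with exit 106 and the recreate advice printed above. Sketch ("upgrade-demo" is an illustrative profile name):

    minikube start -p upgrade-demo --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio
    minikube stop -p upgrade-demo
    # upgrade in place to the newer version
    minikube start -p upgrade-demo --kubernetes-version=v1.28.0-rc.1 --driver=docker --container-runtime=crio
    # downgrades are rejected; delete and recreate instead, as the suggestion text describes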

TestMissingContainerUpgrade (162.86s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:321: (dbg) Run:  /tmp/minikube-v1.9.0.3653248283.exe start -p missing-upgrade-586789 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:321: (dbg) Done: /tmp/minikube-v1.9.0.3653248283.exe start -p missing-upgrade-586789 --memory=2200 --driver=docker  --container-runtime=crio: (1m25.258645124s)
version_upgrade_test.go:330: (dbg) Run:  docker stop missing-upgrade-586789
version_upgrade_test.go:330: (dbg) Done: docker stop missing-upgrade-586789: (11.380738699s)
version_upgrade_test.go:335: (dbg) Run:  docker rm missing-upgrade-586789
version_upgrade_test.go:341: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-586789 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:341: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-586789 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m1.86724698s)
helpers_test.go:175: Cleaning up "missing-upgrade-586789" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-586789
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-586789: (3.948825516s)
--- PASS: TestMissingContainerUpgrade (162.86s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-611637 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-611637 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (67.758032ms)

-- stdout --
	* [NoKubernetes-611637] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17102
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17102-5717/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17102-5717/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
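As the error text says, --no-kubernetes and --kubernetes-version are mutually exclusive, and a version pinned in the global config trips the same check. Sketch ("nok8s-demo" is an illustrative profile name):

    minikube config unset kubernetes-version   # clear any globally pinned version first
    minikube start -p nok8s-demo --no-kubernetes --driver=docker --container-runtime=crio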

TestNoKubernetes/serial/StartWithK8s (37.62s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-611637 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-611637 --driver=docker  --container-runtime=crio: (37.309441085s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-611637 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (37.62s)

TestNoKubernetes/serial/StartWithStopK8s (9.28s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-611637 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-611637 --no-kubernetes --driver=docker  --container-runtime=crio: (6.944101761s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-611637 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-611637 status -o json: exit status 2 (347.594563ms)

-- stdout --
	{"Name":"NoKubernetes-611637","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-611637
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-611637: (1.987306243s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (9.28s)

TestNoKubernetes/serial/Start (6.58s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-611637 --no-kubernetes --driver=docker  --container-runtime=crio
E0821 11:04:33.389082   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-611637 --no-kubernetes --driver=docker  --container-runtime=crio: (6.584464177s)
--- PASS: TestNoKubernetes/serial/Start (6.58s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-611637 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-611637 "sudo systemctl is-active --quiet service kubelet": exit status 1 (278.305736ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
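The verification trick: systemctl is-active --quiet exits 0 only when the unit is active, so a non-zero exit through minikube ssh is the passing outcome here. Sketch:

    if minikube ssh -p nok8s-demo "sudo systemctl is-active --quiet service kubelet"; then
        echo "kubelet unexpectedly running"
    else
        echo "kubelet not running, as expected"
    fi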

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.41s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.41s)
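
Note: both the table and JSON forms of "profile list" are exercised; the JSON form is the one to script against. A small sketch (the .valid[].Name path assumes minikube's usual {"invalid": [...], "valid": [...]} layout):

	# print the names of all valid profiles; the jq path is an assumption
	out/minikube-linux-amd64 profile list --output=json | jq -r '.valid[].Name'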

                                                
                                    
TestNoKubernetes/serial/Stop (1.43s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-611637
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-611637: (1.426750248s)
--- PASS: TestNoKubernetes/serial/Stop (1.43s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (9.54s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-611637 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-611637 --driver=docker  --container-runtime=crio: (9.539545081s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (9.54s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-611637 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-611637 "sudo systemctl is-active --quiet service kubelet": exit status 1 (274.984346ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                    
TestNetworkPlugins/group/false (3.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-872088 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-872088 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (177.436177ms)

                                                
                                                
-- stdout --
	* [false-872088] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17102
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17102-5717/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17102-5717/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0821 11:04:53.625984  161602 out.go:296] Setting OutFile to fd 1 ...
	I0821 11:04:53.626095  161602 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 11:04:53.626103  161602 out.go:309] Setting ErrFile to fd 2...
	I0821 11:04:53.626108  161602 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 11:04:53.626291  161602 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17102-5717/.minikube/bin
	I0821 11:04:53.626834  161602 out.go:303] Setting JSON to false
	I0821 11:04:53.636463  161602 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2844,"bootTime":1692613050,"procs":356,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1039-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0821 11:04:53.636554  161602 start.go:138] virtualization: kvm guest
	I0821 11:04:53.639141  161602 out.go:177] * [false-872088] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0821 11:04:53.641035  161602 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 11:04:53.641146  161602 notify.go:220] Checking for updates...
	I0821 11:04:53.642426  161602 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 11:04:53.643998  161602 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17102-5717/kubeconfig
	I0821 11:04:53.645694  161602 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17102-5717/.minikube
	I0821 11:04:53.647946  161602 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0821 11:04:53.649800  161602 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 11:04:53.652640  161602 config.go:182] Loaded profile config "missing-upgrade-586789": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0821 11:04:53.652916  161602 config.go:182] Loaded profile config "offline-crio-575709": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0821 11:04:53.653056  161602 config.go:182] Loaded profile config "running-upgrade-619999": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0821 11:04:53.653305  161602 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 11:04:53.686244  161602 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0821 11:04:53.686366  161602 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0821 11:04:53.748940  161602 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:110 SystemTime:2023-08-21 11:04:53.737107159 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1039-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0821 11:04:53.749092  161602 docker.go:294] overlay module found
	I0821 11:04:53.750960  161602 out.go:177] * Using the docker driver based on user configuration
	I0821 11:04:53.752538  161602 start.go:298] selected driver: docker
	I0821 11:04:53.752560  161602 start.go:902] validating driver "docker" against <nil>
	I0821 11:04:53.752575  161602 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 11:04:53.755100  161602 out.go:177] 
	W0821 11:04:53.756552  161602 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0821 11:04:53.757843  161602 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-872088 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-872088

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-872088

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-872088

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-872088

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-872088

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-872088

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-872088

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-872088

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-872088

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-872088

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-872088"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-872088"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-872088"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-872088

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-872088"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-872088"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-872088" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-872088" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-872088" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-872088" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-872088" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-872088" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-872088" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-872088" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-872088"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-872088"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-872088"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-872088"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-872088"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-872088" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-872088" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-872088" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-872088"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-872088"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-872088"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-872088"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-872088"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17102-5717/.minikube/ca.crt
    server: https://127.0.0.1:32926
  name: missing-upgrade-586789
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17102-5717/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 21 Aug 2023 11:04:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.67.2:8443
  name: offline-crio-575709
contexts:
- context:
    cluster: missing-upgrade-586789
    user: missing-upgrade-586789
  name: missing-upgrade-586789
- context:
    cluster: offline-crio-575709
    extensions:
    - extension:
        last-update: Mon, 21 Aug 2023 11:04:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: offline-crio-575709
  name: offline-crio-575709
current-context: offline-crio-575709
kind: Config
preferences: {}
users:
- name: missing-upgrade-586789
  user:
    client-certificate: /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/missing-upgrade-586789/client.crt
    client-key: /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/missing-upgrade-586789/client.key
- name: offline-crio-575709
  user:
    client-certificate: /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/offline-crio-575709/client.crt
    client-key: /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/offline-crio-575709/client.key
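
Note: the dump above explains the earlier "context was not found" errors: false-872088 never started, so it never wrote a context, and kubectl only knows about missing-upgrade-586789 and offline-crio-575709. To re-run the same probes against a context that does exist (sketch):

	# point kubectl at one of the contexts actually present in this kubeconfig
	kubectl config use-context offline-crio-575709
	kubectl get nodes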

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-872088

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-872088"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-872088"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-872088"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-872088"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-872088"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-872088"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-872088"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-872088"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-872088"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-872088"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-872088"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-872088"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-872088"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-872088"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-872088"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-872088"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-872088"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-872088"

                                                
                                                
----------------------- debugLogs end: false-872088 [took: 3.071267437s] --------------------------------
helpers_test.go:175: Cleaning up "false-872088" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-872088
--- PASS: TestNetworkPlugins/group/false (3.44s)
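
Note: this group passes because the failure is the expected outcome. With the crio runtime, minikube rejects --cni=false (exit status 14, MK_USAGE) since CRI-O ships no built-in pod network and requires a CNI plugin. A start the validator would accept looks like this (a minimal sketch; bridge is just one valid choice):

	out/minikube-linux-amd64 start -p false-872088 --memory=2048 --cni=bridge --driver=docker --container-runtime=crio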

                                                
                                    
TestPause/serial/Start (45.77s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-942142 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-942142 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (45.772524848s)
--- PASS: TestPause/serial/Start (45.77s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.4s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.40s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (69.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-872088 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0821 11:07:20.793014   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/functional-923429/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-872088 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m9.31220301s)
--- PASS: TestNetworkPlugins/group/auto/Start (69.31s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.49s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-212049
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.49s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (56.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-872088 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-872088 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (56.218636871s)
--- PASS: TestNetworkPlugins/group/flannel/Start (56.22s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-872088 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-872088 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-fxjnr" [64b1b084-196c-4aa8-b592-12d22e6416c3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-fxjnr" [64b1b084-196c-4aa8-b592-12d22e6416c3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.009091552s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.35s)
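
Note: every NetCatPod step follows the same pattern: force-replace the netcat deployment from testdata, then poll until a pod labelled app=netcat is Running and Ready. Outside the harness the same wait can be expressed directly with kubectl (sketch):

	kubectl --context auto-872088 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context auto-872088 wait --for=condition=ready pod -l app=netcat --timeout=15m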

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-872088 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-872088 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-872088 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
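
Note: Localhost and HairPin differ only in the nc target. The first dials localhost:8080 inside the pod; the hairpin check dials the pod's own Service name (netcat), verifying that traffic can leave the pod and loop back through the service VIP. Annotated form of the probe (sketch):

	# -z: connect-only scan, no payload; -w 5: 5s connect timeout; -i 5: 5s interval
	kubectl --context auto-872088 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"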

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-t5fbs" [feeef892-ad4d-4f25-8e86-b5bcaac8d833] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.020971502s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-872088 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (9.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-872088 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-9zlx8" [ce79fb94-7bb6-4d1d-9e3d-8c30bf89940c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-9zlx8" [ce79fb94-7bb6-4d1d-9e3d-8c30bf89940c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.009170346s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.34s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (42.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-872088 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-872088 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (42.341971555s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (42.34s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-872088 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-872088 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-872088 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (71.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-872088 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-872088 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m11.990086328s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (71.99s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (34.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-872088 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0821 11:09:33.389346   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-872088 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (34.167416787s)
--- PASS: TestNetworkPlugins/group/bridge/Start (34.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-872088 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-872088 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-8hcql" [a7a35850-21ce-4b75-ae49-a32b123e8ed5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-8hcql" [a7a35850-21ce-4b75-ae49-a32b123e8ed5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.009050284s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.34s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (21.7s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-872088 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context enable-default-cni-872088 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.143756146s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-872088 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Done: kubectl --context enable-default-cni-872088 exec deployment/netcat -- nslookup kubernetes.default: (5.170631222s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (21.70s)
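
Note: the first nslookup timed out and the test only passed on the harness's retry, a common symptom of CoreDNS not yet serving when the default bridge CNI comes up. When debugging by hand, confirm CoreDNS readiness before blaming the CNI (sketch):

	# CoreDNS pods carry the k8s-app=kube-dns label for historical reasons
	kubectl --context enable-default-cni-872088 get pods -n kube-system -l k8s-app=kube-dns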

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-872088 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-872088 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-s95kx" [298e1d9a-b3ba-48a2-bbb1-33aeebdaac8d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-s95kx" [298e1d9a-b3ba-48a2-bbb1-33aeebdaac8d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.008372644s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-872088 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-872088 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-872088 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-872088 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-872088 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (64.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-872088 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-872088 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m4.857761615s)
--- PASS: TestNetworkPlugins/group/calico/Start (64.86s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-w2gbd" [c84571ae-5176-4cc5-ab00-751c46901e41] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.021045656s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-872088 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.48s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-872088 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-rc4cz" [9f001dc7-1a24-4e94-bb50-c9763881e6c9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-rc4cz" [9f001dc7-1a24-4e94-bb50-c9763881e6c9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.009483524s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.45s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (61.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-872088 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-872088 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m1.923036268s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (61.92s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-872088 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-872088 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-872088 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (135.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-264812 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-264812 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m15.254582839s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (135.25s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-fx9q5" [b37a0d90-ce30-4f54-a47b-47f4914290f2] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.02257259s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-872088 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-872088 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-kpz2k" [8c04403c-7097-49b4-8aef-ac1fa8568ab1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-kpz2k" [8c04403c-7097-49b4-8aef-ac1fa8568ab1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.010896999s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.30s)

TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-872088 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

TestNetworkPlugins/group/calico/NetCatPod (12.02s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-872088 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-qzkmj" [ebcbb548-78ca-4d13-b419-73080e2d5e92] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-qzkmj" [ebcbb548-78ca-4d13-b419-73080e2d5e92] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.079117077s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.02s)

TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-872088 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-872088 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-872088 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

TestNetworkPlugins/group/calico/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-872088 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

TestNetworkPlugins/group/calico/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-872088 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

TestNetworkPlugins/group/calico/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-872088 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)
E0821 11:16:07.903627   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/auto-872088/client.crt: no such file or directory
E0821 11:16:12.657676   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/bridge-872088/client.crt: no such file or directory
E0821 11:16:23.023021   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/flannel-872088/client.crt: no such file or directory
E0821 11:16:24.312401   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/calico-872088/client.crt: no such file or directory
E0821 11:16:24.317658   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/calico-872088/client.crt: no such file or directory
E0821 11:16:24.327923   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/calico-872088/client.crt: no such file or directory
E0821 11:16:24.348212   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/calico-872088/client.crt: no such file or directory
E0821 11:16:24.388499   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/calico-872088/client.crt: no such file or directory
E0821 11:16:24.469474   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/calico-872088/client.crt: no such file or directory
E0821 11:16:24.629845   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/calico-872088/client.crt: no such file or directory
E0821 11:16:24.950324   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/calico-872088/client.crt: no such file or directory
E0821 11:16:25.591182   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/calico-872088/client.crt: no such file or directory
E0821 11:16:26.872066   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/calico-872088/client.crt: no such file or directory
E0821 11:16:28.939885   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/custom-flannel-872088/client.crt: no such file or directory
E0821 11:16:28.945143   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/custom-flannel-872088/client.crt: no such file or directory
E0821 11:16:28.955420   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/custom-flannel-872088/client.crt: no such file or directory
E0821 11:16:28.975673   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/custom-flannel-872088/client.crt: no such file or directory
E0821 11:16:29.015971   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/custom-flannel-872088/client.crt: no such file or directory
E0821 11:16:29.096282   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/custom-flannel-872088/client.crt: no such file or directory
E0821 11:16:29.257400   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/custom-flannel-872088/client.crt: no such file or directory
E0821 11:16:29.432564   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/calico-872088/client.crt: no such file or directory
E0821 11:16:29.577759   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/custom-flannel-872088/client.crt: no such file or directory
E0821 11:16:30.218668   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/custom-flannel-872088/client.crt: no such file or directory
E0821 11:16:31.499492   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/custom-flannel-872088/client.crt: no such file or directory
E0821 11:16:34.060495   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/custom-flannel-872088/client.crt: no such file or directory
E0821 11:16:34.552863   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/calico-872088/client.crt: no such file or directory
E0821 11:16:39.181390   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/custom-flannel-872088/client.crt: no such file or directory
E0821 11:16:42.662602   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/kindnet-872088/client.crt: no such file or directory
E0821 11:16:44.793070   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/calico-872088/client.crt: no such file or directory
E0821 11:16:49.422215   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/custom-flannel-872088/client.crt: no such file or directory
E0821 11:17:05.273568   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/calico-872088/client.crt: no such file or directory
E0821 11:17:09.902635   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/custom-flannel-872088/client.crt: no such file or directory
E0821 11:17:18.989725   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/enable-default-cni-872088/client.crt: no such file or directory
E0821 11:17:20.792881   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/functional-923429/client.crt: no such file or directory
E0821 11:17:34.577907   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/bridge-872088/client.crt: no such file or directory
E0821 11:17:46.233930   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/calico-872088/client.crt: no such file or directory
E0821 11:17:50.862999   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/custom-flannel-872088/client.crt: no such file or directory
E0821 11:18:04.583452   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/kindnet-872088/client.crt: no such file or directory
E0821 11:18:12.278770   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/old-k8s-version-264812/client.crt: no such file or directory
E0821 11:18:12.284047   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/old-k8s-version-264812/client.crt: no such file or directory
E0821 11:18:12.294319   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/old-k8s-version-264812/client.crt: no such file or directory
E0821 11:18:12.314574   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/old-k8s-version-264812/client.crt: no such file or directory
E0821 11:18:12.354840   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/old-k8s-version-264812/client.crt: no such file or directory
E0821 11:18:12.435162   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/old-k8s-version-264812/client.crt: no such file or directory
E0821 11:18:12.595446   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/old-k8s-version-264812/client.crt: no such file or directory
E0821 11:18:12.916003   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/old-k8s-version-264812/client.crt: no such file or directory
E0821 11:18:13.556163   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/old-k8s-version-264812/client.crt: no such file or directory
E0821 11:18:14.836889   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/old-k8s-version-264812/client.crt: no such file or directory
E0821 11:18:17.397580   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/old-k8s-version-264812/client.crt: no such file or directory
E0821 11:18:22.518480   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/old-k8s-version-264812/client.crt: no such file or directory
E0821 11:18:24.061069   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/auto-872088/client.crt: no such file or directory
E0821 11:18:32.758766   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/old-k8s-version-264812/client.crt: no such file or directory
E0821 11:18:39.179995   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/flannel-872088/client.crt: no such file or directory

TestStartStop/group/no-preload/serial/FirstStart (66.12s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-481070 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0-rc.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-481070 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0-rc.1: (1m6.119829731s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (66.12s)

TestStartStop/group/embed-certs/serial/FirstStart (71.02s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-742211 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-742211 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.4: (1m11.024236598s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (71.02s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.42s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-476305 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.4
E0821 11:12:20.792268   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/functional-923429/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-476305 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.4: (42.416491799s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.42s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.4s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-476305 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2ee37f3b-f797-4f8e-b5da-e7b72ec823c5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2ee37f3b-f797-4f8e-b5da-e7b72ec823c5] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 7.014816421s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-476305 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.40s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-476305 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-476305 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.95s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-476305 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-476305 --alsologtostderr -v=3: (12.324209331s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.32s)

TestStartStop/group/no-preload/serial/DeployApp (8.37s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-481070 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [497535e8-206e-4ef4-b46a-007d86f44cc0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [497535e8-206e-4ef4-b46a-007d86f44cc0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.014827605s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-481070 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.37s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-476305 -n default-k8s-diff-port-476305
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-476305 -n default-k8s-diff-port-476305: exit status 7 (61.85581ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-476305 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (338.5s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-476305 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-476305 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.4: (5m38.205828996s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-476305 -n default-k8s-diff-port-476305
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (338.50s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.4s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-264812 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f6cb75c9-f7a3-46a8-ae35-77cf770665f3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f6cb75c9-f7a3-46a8-ae35-77cf770665f3] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.013494349s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-264812 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.40s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.09s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-481070 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-481070 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.09s)

TestStartStop/group/embed-certs/serial/DeployApp (7.51s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-742211 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fd653a02-3221-41f8-87ef-64042253a733] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fd653a02-3221-41f8-87ef-64042253a733] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.028181097s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-742211 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.51s)

TestStartStop/group/no-preload/serial/Stop (11.98s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-481070 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-481070 --alsologtostderr -v=3: (11.97556795s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.98s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.77s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-264812 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-264812 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.77s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.04s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-742211 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-742211 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.04s)

TestStartStop/group/old-k8s-version/serial/Stop (11.92s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-264812 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-264812 --alsologtostderr -v=3: (11.916916885s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.92s)

TestStartStop/group/embed-certs/serial/Stop (11.94s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-742211 --alsologtostderr -v=3
E0821 11:13:24.060793   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/auto-872088/client.crt: no such file or directory
E0821 11:13:24.066081   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/auto-872088/client.crt: no such file or directory
E0821 11:13:24.076349   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/auto-872088/client.crt: no such file or directory
E0821 11:13:24.097382   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/auto-872088/client.crt: no such file or directory
E0821 11:13:24.137681   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/auto-872088/client.crt: no such file or directory
E0821 11:13:24.218180   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/auto-872088/client.crt: no such file or directory
E0821 11:13:24.378675   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/auto-872088/client.crt: no such file or directory
E0821 11:13:24.699300   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/auto-872088/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-742211 --alsologtostderr -v=3: (11.943118753s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.94s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-481070 -n no-preload-481070
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-481070 -n no-preload-481070: exit status 7 (60.707978ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-481070 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0821 11:13:25.340045   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/auto-872088/client.crt: no such file or directory
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/no-preload/serial/SecondStart (337.34s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-481070 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0-rc.1
E0821 11:13:26.620256   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/auto-872088/client.crt: no such file or directory
E0821 11:13:29.181271   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/auto-872088/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-481070 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0-rc.1: (5m37.032657178s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-481070 -n no-preload-481070
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (337.34s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-264812 -n old-k8s-version-264812
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-264812 -n old-k8s-version-264812: exit status 7 (64.002035ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-264812 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/old-k8s-version/serial/SecondStart (66.69s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-264812 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-264812 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (1m6.403155227s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-264812 -n old-k8s-version-264812
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (66.69s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-742211 -n embed-certs-742211
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-742211 -n embed-certs-742211: exit status 7 (70.065309ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-742211 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/SecondStart (339.37s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-742211 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.4
E0821 11:13:34.301987   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/auto-872088/client.crt: no such file or directory
E0821 11:13:39.179287   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/flannel-872088/client.crt: no such file or directory
E0821 11:13:39.184529   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/flannel-872088/client.crt: no such file or directory
E0821 11:13:39.194765   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/flannel-872088/client.crt: no such file or directory
E0821 11:13:39.215031   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/flannel-872088/client.crt: no such file or directory
E0821 11:13:39.255998   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/flannel-872088/client.crt: no such file or directory
E0821 11:13:39.336443   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/flannel-872088/client.crt: no such file or directory
E0821 11:13:39.497271   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/flannel-872088/client.crt: no such file or directory
E0821 11:13:39.817976   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/flannel-872088/client.crt: no such file or directory
E0821 11:13:40.458778   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/flannel-872088/client.crt: no such file or directory
E0821 11:13:41.739068   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/flannel-872088/client.crt: no such file or directory
E0821 11:13:44.299445   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/flannel-872088/client.crt: no such file or directory
E0821 11:13:44.542398   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/auto-872088/client.crt: no such file or directory
E0821 11:13:49.420481   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/flannel-872088/client.crt: no such file or directory
E0821 11:13:56.261215   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/client.crt: no such file or directory
E0821 11:13:59.661212   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/flannel-872088/client.crt: no such file or directory
E0821 11:14:05.022823   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/auto-872088/client.crt: no such file or directory
E0821 11:14:20.141967   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/flannel-872088/client.crt: no such file or directory
E0821 11:14:33.388764   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/ingress-addon-legacy-218089/client.crt: no such file or directory
E0821 11:14:35.146061   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/enable-default-cni-872088/client.crt: no such file or directory
E0821 11:14:35.151349   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/enable-default-cni-872088/client.crt: no such file or directory
E0821 11:14:35.161716   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/enable-default-cni-872088/client.crt: no such file or directory
E0821 11:14:35.182053   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/enable-default-cni-872088/client.crt: no such file or directory
E0821 11:14:35.222339   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/enable-default-cni-872088/client.crt: no such file or directory
E0821 11:14:35.302710   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/enable-default-cni-872088/client.crt: no such file or directory
E0821 11:14:35.463110   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/enable-default-cni-872088/client.crt: no such file or directory
E0821 11:14:35.783671   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/enable-default-cni-872088/client.crt: no such file or directory
E0821 11:14:36.424022   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/enable-default-cni-872088/client.crt: no such file or directory
E0821 11:14:37.705129   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/enable-default-cni-872088/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-742211 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.4: (5m39.056174927s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-742211 -n embed-certs-742211
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (339.37s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-q5mjn" [1acfc4a1-7dac-422d-a2ab-8ba9d33de36d] Running
E0821 11:14:40.265657   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/enable-default-cni-872088/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01477619s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-q5mjn" [1acfc4a1-7dac-422d-a2ab-8ba9d33de36d] Running
E0821 11:14:45.386705   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/enable-default-cni-872088/client.crt: no such file or directory
E0821 11:14:45.982964   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/auto-872088/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00760349s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-264812 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-264812 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/old-k8s-version/serial/Pause (2.65s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-264812 --alsologtostderr -v=1
E0821 11:14:50.734514   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/bridge-872088/client.crt: no such file or directory
E0821 11:14:50.739635   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/bridge-872088/client.crt: no such file or directory
E0821 11:14:50.749908   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/bridge-872088/client.crt: no such file or directory
E0821 11:14:50.770169   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/bridge-872088/client.crt: no such file or directory
E0821 11:14:50.810740   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/bridge-872088/client.crt: no such file or directory
E0821 11:14:50.891685   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/bridge-872088/client.crt: no such file or directory
E0821 11:14:51.052707   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/bridge-872088/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-264812 -n old-k8s-version-264812
E0821 11:14:51.373488   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/bridge-872088/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-264812 -n old-k8s-version-264812: exit status 2 (276.657609ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-264812 -n old-k8s-version-264812
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-264812 -n old-k8s-version-264812: exit status 2 (277.023413ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-264812 --alsologtostderr -v=1
E0821 11:14:52.014101   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/bridge-872088/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-264812 -n old-k8s-version-264812
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-264812 -n old-k8s-version-264812
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.65s)

TestStartStop/group/newest-cni/serial/FirstStart (35.69s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-461266 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0-rc.1
E0821 11:14:55.855150   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/bridge-872088/client.crt: no such file or directory
E0821 11:15:00.976148   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/bridge-872088/client.crt: no such file or directory
E0821 11:15:01.102412   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/flannel-872088/client.crt: no such file or directory
E0821 11:15:11.217158   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/bridge-872088/client.crt: no such file or directory
E0821 11:15:16.108592   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/enable-default-cni-872088/client.crt: no such file or directory
E0821 11:15:20.739713   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/kindnet-872088/client.crt: no such file or directory
E0821 11:15:20.744976   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/kindnet-872088/client.crt: no such file or directory
E0821 11:15:20.755256   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/kindnet-872088/client.crt: no such file or directory
E0821 11:15:20.775618   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/kindnet-872088/client.crt: no such file or directory
E0821 11:15:20.815924   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/kindnet-872088/client.crt: no such file or directory
E0821 11:15:20.896224   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/kindnet-872088/client.crt: no such file or directory
E0821 11:15:21.056545   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/kindnet-872088/client.crt: no such file or directory
E0821 11:15:21.377425   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/kindnet-872088/client.crt: no such file or directory
E0821 11:15:22.017567   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/kindnet-872088/client.crt: no such file or directory
E0821 11:15:23.297758   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/kindnet-872088/client.crt: no such file or directory
E0821 11:15:25.858962   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/kindnet-872088/client.crt: no such file or directory
E0821 11:15:30.980133   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/kindnet-872088/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-461266 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0-rc.1: (35.69308044s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (35.69s)
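Note: the E0821 ... cert_rotation.go:168 lines interleaved above are background noise from client-go's certificate reloader, which is still watching the client certificates of profiles (bridge-872088, flannel-872088, kindnet-872088, ...) that earlier tests already deleted; they do not affect this test's result. A minimal sketch of the failure mode, assuming only that the reloader re-reads the key pair from disk (the paths are taken from the log):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    )

    func main() {
    	// The profile directory was removed when an earlier test cleaned up, so
    	// reloading the client key pair fails with ENOENT, matching the
    	// "key failed with : open ...: no such file or directory" lines above.
    	base := "/home/jenkins/minikube-integration/17102-5717/.minikube/profiles/kindnet-872088"
    	if _, err := tls.LoadX509KeyPair(base+"/client.crt", base+"/client.key"); err != nil {
    		fmt.Println("key failed with :", err)
    	}
    }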

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.87s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-461266 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0821 11:15:31.697316   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/bridge-872088/client.crt: no such file or directory
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.87s)

TestStartStop/group/newest-cni/serial/Stop (1.2s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-461266 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-461266 --alsologtostderr -v=3: (1.200947595s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.20s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-461266 -n newest-cni-461266
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-461266 -n newest-cni-461266: exit status 7 (60.35571ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-461266 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)
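The exit status 7 above is how the test tells a cleanly stopped host apart from a real error: minikube status exits non-zero whenever the host is not Running, so the helper logs "status error ... (may be ok)" and continues. A rough sketch of that probe (the command line is copied from the log; reading 7 as "host stopped" is an inference from this run, not a documented contract):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Same status invocation as the test above; a stopped host prints
    	// "Stopped" and exits with a non-zero code (7 in this run).
    	cmd := exec.Command("out/minikube-linux-amd64", "status",
    		"--format={{.Host}}", "-p", "newest-cni-461266", "-n", "newest-cni-461266")
    	out, err := cmd.CombinedOutput()
    	fmt.Printf("output: %s", out)
    	if ee, ok := err.(*exec.ExitError); ok {
    		fmt.Println("exit status:", ee.ExitCode())
    	}
    }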

TestStartStop/group/newest-cni/serial/SecondStart (25.72s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-461266 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0-rc.1
E0821 11:15:41.221155   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/kindnet-872088/client.crt: no such file or directory
E0821 11:15:53.215628   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/addons-351207/client.crt: no such file or directory
E0821 11:15:57.069185   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/enable-default-cni-872088/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-461266 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0-rc.1: (25.428161981s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-461266 -n newest-cni-461266
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (25.72s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-461266 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.32s)
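VerifyKubernetesImages works by dumping the node's image list as JSON over SSH and flagging anything outside the expected minikube image set, such as the kindnetd image above. A toy parser for that dump (the JSON is trimmed to repoTags, and the schema shown is an assumption for illustration, not crictl's full output format):

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // imageList models just enough of `crictl images -o json` to read tags.
    type imageList struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    func main() {
    	raw := []byte(`{"images":[{"repoTags":["kindest/kindnetd:v20230511-dc714da8"]}]}`)
    	var list imageList
    	if err := json.Unmarshal(raw, &list); err != nil {
    		panic(err)
    	}
    	// The test scans these tags and reports any "non-minikube" image it finds.
    	for _, img := range list.Images {
    		fmt.Println(img.RepoTags)
    	}
    }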

TestStartStop/group/newest-cni/serial/Pause (2.4s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-461266 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-461266 -n newest-cni-461266
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-461266 -n newest-cni-461266: exit status 2 (289.408699ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-461266 -n newest-cni-461266
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-461266 -n newest-cni-461266: exit status 2 (284.011714ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-461266 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-461266 -n newest-cni-461266
E0821 11:16:01.701465   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/kindnet-872088/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-461266 -n newest-cni-461266
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.40s)
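The --format values passed to minikube status are Go text/template expressions rendered against a status struct, which is why a paused cluster prints Paused for {{.APIServer}} but Stopped for {{.Kubelet}} in the blocks above. An illustrative sketch (the Status struct below is a stand-in, not minikube's actual type):

    package main

    import (
    	"os"
    	"text/template"
    )

    // Status is a stand-in with only the fields the tests above query.
    type Status struct {
    	Host, Kubelet, APIServer string
    }

    func main() {
    	st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Paused"}
    	for _, f := range []string{"{{.Host}}", "{{.Kubelet}}", "{{.APIServer}}"} {
    		// Each --format flag compiles to one of these templates.
    		template.Must(template.New("s").Parse(f)).Execute(os.Stdout, st)
    		os.Stdout.WriteString("\n")
    	}
    }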

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-6mhdw" [02c9a3b5-751b-4b04-ba56-b4c6acd87a51] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-6mhdw" [02c9a3b5-751b-4b04-ba56-b4c6acd87a51] Running
E0821 11:18:51.743937   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/auto-872088/client.crt: no such file or directory
E0821 11:18:53.239945   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/old-k8s-version-264812/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.015955682s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.02s)
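The "waiting 9m0s for pods matching ..." helper is, in essence, a poll of the pod list by label selector until a pod reports Running (the real helper also checks readiness conditions). A hedged sketch of that loop with client-go, assuming a kubeconfig path for illustration:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Poll until a matching pod is Running or the 9m budget is spent.
    	deadline := time.Now().Add(9 * time.Minute)
    	for time.Now().Before(deadline) {
    		pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
    			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
    		if err == nil {
    			for _, p := range pods.Items {
    				if p.Status.Phase == "Running" {
    					fmt.Println("healthy:", p.Name)
    					return
    				}
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out")
    }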

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-6mhdw" [02c9a3b5-751b-4b04-ba56-b4c6acd87a51] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01169634s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-476305 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-476305 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.34s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.89s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-476305 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-476305 -n default-k8s-diff-port-476305
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-476305 -n default-k8s-diff-port-476305: exit status 2 (326.959109ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-476305 -n default-k8s-diff-port-476305
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-476305 -n default-k8s-diff-port-476305: exit status 2 (307.019722ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-476305 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-476305 -n default-k8s-diff-port-476305
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-476305 -n default-k8s-diff-port-476305
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.89s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-hzl78" [46105c15-5dad-4559-a1d3-2bbbbb32a4a2] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-hzl78" [46105c15-5dad-4559-a1d3-2bbbbb32a4a2] Running
E0821 11:19:12.783885   12460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/custom-flannel-872088/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.017614074s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.02s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (9.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-gtbpf" [621ef0ed-cb1e-4825-a22b-67d673e9ea5d] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-gtbpf" [621ef0ed-cb1e-4825-a22b-67d673e9ea5d] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.017563952s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (9.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-hzl78" [46105c15-5dad-4559-a1d3-2bbbbb32a4a2] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009418272s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-481070 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-481070 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/no-preload/serial/Pause (2.54s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-481070 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-481070 -n no-preload-481070
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-481070 -n no-preload-481070: exit status 2 (273.917404ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-481070 -n no-preload-481070
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-481070 -n no-preload-481070: exit status 2 (270.883806ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-481070 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-481070 -n no-preload-481070
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-481070 -n no-preload-481070
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.54s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-gtbpf" [621ef0ed-cb1e-4825-a22b-67d673e9ea5d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010208953s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-742211 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-742211 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/embed-certs/serial/Pause (2.54s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-742211 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-742211 -n embed-certs-742211
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-742211 -n embed-certs-742211: exit status 2 (271.886064ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-742211 -n embed-certs-742211
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-742211 -n embed-certs-742211: exit status 2 (270.840534ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-742211 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-742211 -n embed-certs-742211
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-742211 -n embed-certs-742211
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.54s)

Test skip (27/304)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.27.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.27.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.4/cached-images (0.00s)

TestDownloadOnly/v1.27.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.27.4/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.27.4/binaries (0.00s)

TestDownloadOnly/v1.27.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.27.4/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.27.4/kubectl (0.00s)

TestDownloadOnly/v1.28.0-rc.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0-rc.1/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0-rc.1/cached-images (0.00s)

TestDownloadOnly/v1.28.0-rc.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0-rc.1/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0-rc.1/binaries (0.00s)

TestDownloadOnly/v1.28.0-rc.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0-rc.1/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0-rc.1/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (3.32s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:522: 
----------------------- debugLogs start: kubenet-872088 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-872088

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-872088

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-872088

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-872088

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-872088

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-872088

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-872088

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-872088

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-872088

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-872088

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-872088"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-872088"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-872088"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-872088

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-872088"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-872088"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-872088" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-872088" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-872088" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-872088" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-872088" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-872088" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-872088" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-872088" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-872088"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-872088"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-872088"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-872088"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-872088"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-872088" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-872088" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-872088" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-872088"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-872088"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-872088"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-872088"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-872088"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
certificate-authority: /home/jenkins/minikube-integration/17102-5717/.minikube/ca.crt
server: https://127.0.0.1:32926
name: missing-upgrade-586789
- cluster:
certificate-authority: /home/jenkins/minikube-integration/17102-5717/.minikube/ca.crt
extensions:
- extension:
last-update: Mon, 21 Aug 2023 11:04:25 UTC
provider: minikube.sigs.k8s.io
version: v1.31.2
name: cluster_info
server: https://192.168.67.2:8443
name: offline-crio-575709
contexts:
- context:
cluster: missing-upgrade-586789
user: missing-upgrade-586789
name: missing-upgrade-586789
- context:
cluster: offline-crio-575709
extensions:
- extension:
last-update: Mon, 21 Aug 2023 11:04:25 UTC
provider: minikube.sigs.k8s.io
version: v1.31.2
name: context_info
namespace: default
user: offline-crio-575709
name: offline-crio-575709
current-context: offline-crio-575709
kind: Config
preferences: {}
users:
- name: missing-upgrade-586789
user:
client-certificate: /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/missing-upgrade-586789/client.crt
client-key: /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/missing-upgrade-586789/client.key
- name: offline-crio-575709
user:
client-certificate: /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/offline-crio-575709/client.crt
client-key: /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/offline-crio-575709/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-872088

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-872088"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-872088"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-872088"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-872088"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-872088"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-872088"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-872088"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-872088"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-872088"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-872088"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-872088"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-872088"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-872088"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-872088"
>>> host: crio daemon status:
* Profile "kubenet-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-872088"
>>> host: crio daemon config:
* Profile "kubenet-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-872088"
>>> host: /etc/crio:
* Profile "kubenet-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-872088"
>>> host: crio config:
* Profile "kubenet-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-872088"
----------------------- debugLogs end: kubenet-872088 [took: 3.165325336s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-872088" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-872088
--- SKIP: TestNetworkPlugins/group/kubenet (3.32s)
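Every ">>> host:" probe above prints the same hint because the debug-log collector runs against a profile that was skipped before it was ever started. The failure can be reproduced by hand; the ssh line below is a hypothetical reconstruction of one probe, not the collector's exact invocation, and only "profile list" and "ssh" are standard minikube subcommands:

    # Show which profiles actually exist on this host.
    out/minikube-linux-amd64 profile list

    # Any probe scoped to a never-started profile fails identically, e.g.
    # (hypothetical reconstruction of the "crio config" probe):
    out/minikube-linux-amd64 -p kubenet-872088 ssh "sudo crio config"
    # * Profile "kubenet-872088" not found. Run "minikube profile list" to view all profiles.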
TestNetworkPlugins/group/cilium (3.93s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-872088 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-872088
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-872088
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-872088
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-872088
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-872088
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-872088
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-872088
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-872088
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-872088
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-872088
>>> host: /etc/nsswitch.conf:
* Profile "cilium-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-872088"
>>> host: /etc/hosts:
* Profile "cilium-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-872088"
>>> host: /etc/resolv.conf:
* Profile "cilium-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-872088"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-872088
>>> host: crictl pods:
* Profile "cilium-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-872088"
>>> host: crictl containers:
* Profile "cilium-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-872088"
>>> k8s: describe netcat deployment:
error: context "cilium-872088" does not exist
>>> k8s: describe netcat pod(s):
error: context "cilium-872088" does not exist
>>> k8s: netcat logs:
error: context "cilium-872088" does not exist
>>> k8s: describe coredns deployment:
error: context "cilium-872088" does not exist
>>> k8s: describe coredns pods:
error: context "cilium-872088" does not exist
>>> k8s: coredns logs:
error: context "cilium-872088" does not exist
>>> k8s: describe api server pod(s):
error: context "cilium-872088" does not exist
>>> k8s: api server logs:
error: context "cilium-872088" does not exist
>>> host: /etc/cni:
* Profile "cilium-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-872088"
>>> host: ip a s:
* Profile "cilium-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-872088"
>>> host: ip r s:
* Profile "cilium-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-872088"
>>> host: iptables-save:
* Profile "cilium-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-872088"
>>> host: iptables table nat:
* Profile "cilium-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-872088"
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-872088
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-872088
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-872088" does not exist
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-872088" does not exist
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-872088
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-872088
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-872088" does not exist
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-872088" does not exist
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-872088" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-872088" does not exist
>>> k8s: kube-proxy logs:
error: context "cilium-872088" does not exist
>>> host: kubelet daemon status:
* Profile "cilium-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-872088"
>>> host: kubelet daemon config:
* Profile "cilium-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-872088"
>>> k8s: kubelet logs:
* Profile "cilium-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-872088"
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-872088"
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-872088"
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17102-5717/.minikube/ca.crt
    server: https://127.0.0.1:32926
  name: missing-upgrade-586789
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17102-5717/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 21 Aug 2023 11:04:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.67.2:8443
  name: offline-crio-575709
contexts:
- context:
    cluster: missing-upgrade-586789
    user: missing-upgrade-586789
  name: missing-upgrade-586789
- context:
    cluster: offline-crio-575709
    extensions:
    - extension:
        last-update: Mon, 21 Aug 2023 11:04:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: offline-crio-575709
  name: offline-crio-575709
current-context: offline-crio-575709
kind: Config
preferences: {}
users:
- name: missing-upgrade-586789
  user:
    client-certificate: /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/missing-upgrade-586789/client.crt
    client-key: /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/missing-upgrade-586789/client.key
- name: offline-crio-575709
  user:
    client-certificate: /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/offline-crio-575709/client.crt
    client-key: /home/jenkins/minikube-integration/17102-5717/.minikube/profiles/offline-crio-575709/client.key
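This kubeconfig also explains the repeated "context was not found" errors above: only the missing-upgrade-586789 and offline-crio-575709 entries exist, so any kubectl call scoped to cilium-872088 has nothing to resolve. A quick check with standard kubectl subcommands (the context name is taken from this log):

    # List every context kubectl knows about; cilium-872088 is absent.
    kubectl config get-contexts -o name

    # Reproduces the failure mode seen throughout this section:
    kubectl --context cilium-872088 get pods
    # Error in configuration: context was not found for specified context: cilium-872088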
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-872088
>>> host: docker daemon status:
* Profile "cilium-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-872088"
>>> host: docker daemon config:
* Profile "cilium-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-872088"
>>> host: /etc/docker/daemon.json:
* Profile "cilium-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-872088"
>>> host: docker system info:
* Profile "cilium-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-872088"
>>> host: cri-docker daemon status:
* Profile "cilium-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-872088"
>>> host: cri-docker daemon config:
* Profile "cilium-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-872088"
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-872088"
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-872088"
>>> host: cri-dockerd version:
* Profile "cilium-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-872088"
>>> host: containerd daemon status:
* Profile "cilium-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-872088"
>>> host: containerd daemon config:
* Profile "cilium-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-872088"
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-872088"
>>> host: /etc/containerd/config.toml:
* Profile "cilium-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-872088"
>>> host: containerd config dump:
* Profile "cilium-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-872088"
>>> host: crio daemon status:
* Profile "cilium-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-872088"
>>> host: crio daemon config:
* Profile "cilium-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-872088"
>>> host: /etc/crio:
* Profile "cilium-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-872088"
>>> host: crio config:
* Profile "cilium-872088" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-872088"
----------------------- debugLogs end: cilium-872088 [took: 3.771206093s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-872088" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-872088
--- SKIP: TestNetworkPlugins/group/cilium (3.93s)
TestStartStop/group/disable-driver-mounts (0.17s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-204062" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-204062
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)
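The group is gated to the VirtualBox driver, so on this Docker-driver job it only creates and deletes a placeholder profile. A sketch of running it locally, assuming a VirtualBox install; treating -minikube-start-args as minikube's integration-harness flag is an assumption here, not taken from this log:

    # Run only this group against VirtualBox (sketch, not the CI invocation):
    go test ./test/integration -v \
      -test.run 'TestStartStop/group/disable-driver-mounts' \
      -args -minikube-start-args="--driver=virtualbox"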