Test Report: Docker_Linux_crio 16890

dc702cb3cbb2bfe371541339d66d19e451f60279:2023-07-17:30187

Test fail (8/298)

TestAddons/parallel/Ingress (152.25s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-646610 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-646610 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-646610 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [1b1e3135-8ec7-46d0-a8da-4b27cc9c37a1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [1b1e3135-8ec7-46d0-a8da-4b27cc9c37a1] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.010506392s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p addons-646610 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-646610 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.681086941s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
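Note: exit status 28 from the ssh curl step above is curl's timeout code, so the request to the ingress controller hung rather than being refused. A minimal way to re-run the same probe by hand against this profile (a sketch that assumes the addons-646610 cluster is still up with the ingress addon enabled; the --max-time flag is an addition here so a hung request returns quickly):

# Wait for the ingress-nginx controller, then re-apply the test manifests.
kubectl --context addons-646610 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
kubectl --context addons-646610 replace --force -f testdata/nginx-ingress-v1.yaml
kubectl --context addons-646610 replace --force -f testdata/nginx-pod-svc.yaml
# Repeat the failing probe from inside the node; curl exits 28 when the transfer times out.
out/minikube-linux-amd64 -p addons-646610 ssh "curl -s --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"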
addons_test.go:262: (dbg) Run:  kubectl --context addons-646610 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p addons-646610 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p addons-646610 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p addons-646610 addons disable ingress-dns --alsologtostderr -v=1: (1.327944755s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p addons-646610 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p addons-646610 addons disable ingress --alsologtostderr -v=1: (7.620094085s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-646610
helpers_test.go:235: (dbg) docker inspect addons-646610:

-- stdout --
	[
	    {
	        "Id": "06535e39775498d0893bc2f8f6b69829874bf1524803ed87dba1533c3b1653b7",
	        "Created": "2023-07-17T18:45:57.046957198Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 146381,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-17T18:45:57.347473855Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6cc01e6091959400f260dc442708e7c71630b58dab1f7c344cb00926bd84950",
	        "ResolvConfPath": "/var/lib/docker/containers/06535e39775498d0893bc2f8f6b69829874bf1524803ed87dba1533c3b1653b7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/06535e39775498d0893bc2f8f6b69829874bf1524803ed87dba1533c3b1653b7/hostname",
	        "HostsPath": "/var/lib/docker/containers/06535e39775498d0893bc2f8f6b69829874bf1524803ed87dba1533c3b1653b7/hosts",
	        "LogPath": "/var/lib/docker/containers/06535e39775498d0893bc2f8f6b69829874bf1524803ed87dba1533c3b1653b7/06535e39775498d0893bc2f8f6b69829874bf1524803ed87dba1533c3b1653b7-json.log",
	        "Name": "/addons-646610",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-646610:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-646610",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ae0f4aab920f3a7db8cef66f8da6625fb0ba25f44f4508154e1c6ed88af07be0-init/diff:/var/lib/docker/overlay2/d8b40fcaabfbbb6eb20cfe7c35f752b4babaa96b29803507d5f63d9939e9e0f0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ae0f4aab920f3a7db8cef66f8da6625fb0ba25f44f4508154e1c6ed88af07be0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ae0f4aab920f3a7db8cef66f8da6625fb0ba25f44f4508154e1c6ed88af07be0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ae0f4aab920f3a7db8cef66f8da6625fb0ba25f44f4508154e1c6ed88af07be0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-646610",
	                "Source": "/var/lib/docker/volumes/addons-646610/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-646610",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-646610",
	                "name.minikube.sigs.k8s.io": "addons-646610",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "383f4c3780e0cb43b58f7d57cfacee4fee570cf966d4df6f09aeb003370a3bc8",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/383f4c3780e0",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-646610": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "06535e397754",
	                        "addons-646610"
	                    ],
	                    "NetworkID": "ec46d4d7ce96426b6043b52a2d16b21ecaaa457584d2fb3b64148633d9ab5149",
	                    "EndpointID": "cd0232d015ba1659e9a00718fddbfe7bf1bf1cab5a4acc25711cd69d840d7de5",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
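Note: the 22/tcp entry under NetworkSettings.Ports above is the host port the harness dials for SSH (32772 in this run), and the addons-646610 network block carries the node IP 192.168.49.2 used by the ingress-dns check. The two values can be pulled by hand with Go templates like the ones the harness runs later in this log (a sketch for manual debugging, not part of the test output):

# Host port mapped to the node's SSH port (prints 32772 for this run).
docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-646610
# Node IP on the addons-646610 network (prints 192.168.49.2 for this run).
docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' addons-646610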
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-646610 -n addons-646610
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-646610 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-646610 logs -n 25: (1.171801321s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-884134   | jenkins | v1.30.1 | 17 Jul 23 18:45 UTC |                     |
	|         | -p download-only-884134        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-884134   | jenkins | v1.30.1 | 17 Jul 23 18:45 UTC |                     |
	|         | -p download-only-884134        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3   |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | --all                          | minikube               | jenkins | v1.30.1 | 17 Jul 23 18:45 UTC | 17 Jul 23 18:45 UTC |
	| delete  | -p download-only-884134        | download-only-884134   | jenkins | v1.30.1 | 17 Jul 23 18:45 UTC | 17 Jul 23 18:45 UTC |
	| delete  | -p download-only-884134        | download-only-884134   | jenkins | v1.30.1 | 17 Jul 23 18:45 UTC | 17 Jul 23 18:45 UTC |
	| start   | --download-only -p             | download-docker-134543 | jenkins | v1.30.1 | 17 Jul 23 18:45 UTC |                     |
	|         | download-docker-134543         |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | -p download-docker-134543      | download-docker-134543 | jenkins | v1.30.1 | 17 Jul 23 18:45 UTC | 17 Jul 23 18:45 UTC |
	| start   | --download-only -p             | binary-mirror-288495   | jenkins | v1.30.1 | 17 Jul 23 18:45 UTC |                     |
	|         | binary-mirror-288495           |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --binary-mirror                |                        |         |         |                     |                     |
	|         | http://127.0.0.1:45155         |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-288495        | binary-mirror-288495   | jenkins | v1.30.1 | 17 Jul 23 18:45 UTC | 17 Jul 23 18:45 UTC |
	| start   | -p addons-646610               | addons-646610          | jenkins | v1.30.1 | 17 Jul 23 18:45 UTC | 17 Jul 23 18:47 UTC |
	|         | --wait=true --memory=4000      |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --addons=registry              |                        |         |         |                     |                     |
	|         | --addons=metrics-server        |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                        |         |         |                     |                     |
	|         | --addons=gcp-auth              |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --addons=ingress               |                        |         |         |                     |                     |
	|         | --addons=ingress-dns           |                        |         |         |                     |                     |
	|         | --addons=helm-tiller           |                        |         |         |                     |                     |
	| addons  | enable headlamp                | addons-646610          | jenkins | v1.30.1 | 17 Jul 23 18:47 UTC | 17 Jul 23 18:47 UTC |
	|         | -p addons-646610               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-646610          | jenkins | v1.30.1 | 17 Jul 23 18:47 UTC | 17 Jul 23 18:47 UTC |
	|         | addons-646610                  |                        |         |         |                     |                     |
	| addons  | addons-646610 addons           | addons-646610          | jenkins | v1.30.1 | 17 Jul 23 18:47 UTC | 17 Jul 23 18:47 UTC |
	|         | disable metrics-server         |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p    | addons-646610          | jenkins | v1.30.1 | 17 Jul 23 18:47 UTC | 17 Jul 23 18:47 UTC |
	|         | addons-646610                  |                        |         |         |                     |                     |
	| ip      | addons-646610 ip               | addons-646610          | jenkins | v1.30.1 | 17 Jul 23 18:47 UTC | 17 Jul 23 18:47 UTC |
	| addons  | addons-646610 addons disable   | addons-646610          | jenkins | v1.30.1 | 17 Jul 23 18:47 UTC | 17 Jul 23 18:47 UTC |
	|         | registry --alsologtostderr     |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| addons  | addons-646610 addons disable   | addons-646610          | jenkins | v1.30.1 | 17 Jul 23 18:47 UTC | 17 Jul 23 18:47 UTC |
	|         | helm-tiller --alsologtostderr  |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| ssh     | addons-646610 ssh curl -s      | addons-646610          | jenkins | v1.30.1 | 17 Jul 23 18:48 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:    |                        |         |         |                     |                     |
	|         | nginx.example.com'             |                        |         |         |                     |                     |
	| addons  | addons-646610 addons           | addons-646610          | jenkins | v1.30.1 | 17 Jul 23 18:48 UTC | 17 Jul 23 18:48 UTC |
	|         | disable csi-hostpath-driver    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | addons-646610 addons           | addons-646610          | jenkins | v1.30.1 | 17 Jul 23 18:48 UTC | 17 Jul 23 18:48 UTC |
	|         | disable volumesnapshots        |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| ip      | addons-646610 ip               | addons-646610          | jenkins | v1.30.1 | 17 Jul 23 18:50 UTC | 17 Jul 23 18:50 UTC |
	| addons  | addons-646610 addons disable   | addons-646610          | jenkins | v1.30.1 | 17 Jul 23 18:50 UTC | 17 Jul 23 18:50 UTC |
	|         | ingress-dns --alsologtostderr  |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| addons  | addons-646610 addons disable   | addons-646610          | jenkins | v1.30.1 | 17 Jul 23 18:50 UTC | 17 Jul 23 18:50 UTC |
	|         | ingress --alsologtostderr -v=1 |                        |         |         |                     |                     |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 18:45:33
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 18:45:33.721003  145708 out.go:296] Setting OutFile to fd 1 ...
	I0717 18:45:33.721117  145708 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 18:45:33.721125  145708 out.go:309] Setting ErrFile to fd 2...
	I0717 18:45:33.721129  145708 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 18:45:33.721333  145708 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-138069/.minikube/bin
	I0717 18:45:33.721924  145708 out.go:303] Setting JSON to false
	I0717 18:45:33.722781  145708 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":12485,"bootTime":1689607049,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 18:45:33.722846  145708 start.go:138] virtualization: kvm guest
	I0717 18:45:33.725382  145708 out.go:177] * [addons-646610] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 18:45:33.726923  145708 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 18:45:33.726971  145708 notify.go:220] Checking for updates...
	I0717 18:45:33.728388  145708 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 18:45:33.730021  145708 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16890-138069/kubeconfig
	I0717 18:45:33.731479  145708 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-138069/.minikube
	I0717 18:45:33.732785  145708 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 18:45:33.734096  145708 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 18:45:33.735665  145708 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 18:45:33.756504  145708 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 18:45:33.756621  145708 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 18:45:33.805575  145708 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:41 SystemTime:2023-07-17 18:45:33.797386233 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 18:45:33.805719  145708 docker.go:294] overlay module found
	I0717 18:45:33.807905  145708 out.go:177] * Using the docker driver based on user configuration
	I0717 18:45:33.809370  145708 start.go:298] selected driver: docker
	I0717 18:45:33.809382  145708 start.go:880] validating driver "docker" against <nil>
	I0717 18:45:33.809393  145708 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 18:45:33.810135  145708 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 18:45:33.858788  145708 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:41 SystemTime:2023-07-17 18:45:33.850901021 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 18:45:33.858939  145708 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0717 18:45:33.859128  145708 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:45:33.861088  145708 out.go:177] * Using Docker driver with root privileges
	I0717 18:45:33.862587  145708 cni.go:84] Creating CNI manager for ""
	I0717 18:45:33.862604  145708 cni.go:149] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 18:45:33.862613  145708 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 18:45:33.862623  145708 start_flags.go:319] config:
	{Name:addons-646610 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-646610 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 18:45:33.864353  145708 out.go:177] * Starting control plane node addons-646610 in cluster addons-646610
	I0717 18:45:33.865663  145708 cache.go:122] Beginning downloading kic base image for docker with crio
	I0717 18:45:33.867019  145708 out.go:177] * Pulling base image ...
	I0717 18:45:33.868334  145708 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 18:45:33.868367  145708 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16890-138069/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0717 18:45:33.868376  145708 cache.go:57] Caching tarball of preloaded images
	I0717 18:45:33.868420  145708 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 18:45:33.868461  145708 preload.go:174] Found /home/jenkins/minikube-integration/16890-138069/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 18:45:33.868474  145708 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0717 18:45:33.868792  145708 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/config.json ...
	I0717 18:45:33.868818  145708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/config.json: {Name:mk53ca36f4f9bfa6eb6b7db588b87112ec702c7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:45:33.885152  145708 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0717 18:45:33.885288  145708 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory
	I0717 18:45:33.885311  145708 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory, skipping pull
	I0717 18:45:33.885320  145708 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in cache, skipping pull
	I0717 18:45:33.885327  145708 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 as a tarball
	I0717 18:45:33.885334  145708 cache.go:163] Loading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 from local cache
	I0717 18:45:44.522310  145708 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 from cached tarball
	I0717 18:45:44.522354  145708 cache.go:195] Successfully downloaded all kic artifacts
	I0717 18:45:44.522397  145708 start.go:365] acquiring machines lock for addons-646610: {Name:mk4e561f4715a0f06371a1bda8da59179cf788a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:45:44.522531  145708 start.go:369] acquired machines lock for "addons-646610" in 111.923µs
	I0717 18:45:44.522557  145708 start.go:93] Provisioning new machine with config: &{Name:addons-646610 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-646610 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:45:44.522631  145708 start.go:125] createHost starting for "" (driver="docker")
	I0717 18:45:44.524639  145708 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0717 18:45:44.524853  145708 start.go:159] libmachine.API.Create for "addons-646610" (driver="docker")
	I0717 18:45:44.524886  145708 client.go:168] LocalClient.Create starting
	I0717 18:45:44.524970  145708 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem
	I0717 18:45:44.672937  145708 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/cert.pem
	I0717 18:45:44.845181  145708 cli_runner.go:164] Run: docker network inspect addons-646610 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0717 18:45:44.860718  145708 cli_runner.go:211] docker network inspect addons-646610 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0717 18:45:44.860798  145708 network_create.go:281] running [docker network inspect addons-646610] to gather additional debugging logs...
	I0717 18:45:44.860826  145708 cli_runner.go:164] Run: docker network inspect addons-646610
	W0717 18:45:44.875538  145708 cli_runner.go:211] docker network inspect addons-646610 returned with exit code 1
	I0717 18:45:44.875566  145708 network_create.go:284] error running [docker network inspect addons-646610]: docker network inspect addons-646610: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-646610 not found
	I0717 18:45:44.875578  145708 network_create.go:286] output of [docker network inspect addons-646610]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-646610 not found
	
	** /stderr **
	I0717 18:45:44.875620  145708 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 18:45:44.890754  145708 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0012787a0}
	I0717 18:45:44.890795  145708 network_create.go:123] attempt to create docker network addons-646610 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0717 18:45:44.890855  145708 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-646610 addons-646610
	I0717 18:45:44.940843  145708 network_create.go:107] docker network addons-646610 192.168.49.0/24 created
	I0717 18:45:44.940875  145708 kic.go:117] calculated static IP "192.168.49.2" for the "addons-646610" container
	I0717 18:45:44.940934  145708 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0717 18:45:44.955498  145708 cli_runner.go:164] Run: docker volume create addons-646610 --label name.minikube.sigs.k8s.io=addons-646610 --label created_by.minikube.sigs.k8s.io=true
	I0717 18:45:44.973199  145708 oci.go:103] Successfully created a docker volume addons-646610
	I0717 18:45:44.973295  145708 cli_runner.go:164] Run: docker run --rm --name addons-646610-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-646610 --entrypoint /usr/bin/test -v addons-646610:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0717 18:45:52.134758  145708 cli_runner.go:217] Completed: docker run --rm --name addons-646610-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-646610 --entrypoint /usr/bin/test -v addons-646610:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib: (7.161376812s)
	I0717 18:45:52.134792  145708 oci.go:107] Successfully prepared a docker volume addons-646610
	I0717 18:45:52.134826  145708 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 18:45:52.134861  145708 kic.go:190] Starting extracting preloaded images to volume ...
	I0717 18:45:52.134940  145708 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16890-138069/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-646610:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0717 18:45:56.983157  145708 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16890-138069/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-646610:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (4.848161882s)
	I0717 18:45:56.983200  145708 kic.go:199] duration metric: took 4.848333 seconds to extract preloaded images to volume
	W0717 18:45:56.983350  145708 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0717 18:45:56.983468  145708 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0717 18:45:57.032927  145708 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-646610 --name addons-646610 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-646610 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-646610 --network addons-646610 --ip 192.168.49.2 --volume addons-646610:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0717 18:45:57.356719  145708 cli_runner.go:164] Run: docker container inspect addons-646610 --format={{.State.Running}}
	I0717 18:45:57.374054  145708 cli_runner.go:164] Run: docker container inspect addons-646610 --format={{.State.Status}}
	I0717 18:45:57.392201  145708 cli_runner.go:164] Run: docker exec addons-646610 stat /var/lib/dpkg/alternatives/iptables
	I0717 18:45:57.452556  145708 oci.go:144] the created container "addons-646610" has a running status.
	I0717 18:45:57.452602  145708 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16890-138069/.minikube/machines/addons-646610/id_rsa...
	I0717 18:45:57.753597  145708 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16890-138069/.minikube/machines/addons-646610/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0717 18:45:57.776879  145708 cli_runner.go:164] Run: docker container inspect addons-646610 --format={{.State.Status}}
	I0717 18:45:57.796109  145708 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0717 18:45:57.796135  145708 kic_runner.go:114] Args: [docker exec --privileged addons-646610 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0717 18:45:57.876738  145708 cli_runner.go:164] Run: docker container inspect addons-646610 --format={{.State.Status}}
	I0717 18:45:57.898093  145708 machine.go:88] provisioning docker machine ...
	I0717 18:45:57.898130  145708 ubuntu.go:169] provisioning hostname "addons-646610"
	I0717 18:45:57.898179  145708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-646610
	I0717 18:45:57.915123  145708 main.go:141] libmachine: Using SSH client type: native
	I0717 18:45:57.915751  145708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0717 18:45:57.915776  145708 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-646610 && echo "addons-646610" | sudo tee /etc/hostname
	I0717 18:45:58.095920  145708 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-646610
	
	I0717 18:45:58.096066  145708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-646610
	I0717 18:45:58.112369  145708 main.go:141] libmachine: Using SSH client type: native
	I0717 18:45:58.112776  145708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0717 18:45:58.112795  145708 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-646610' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-646610/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-646610' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 18:45:58.236010  145708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:45:58.236044  145708 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16890-138069/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-138069/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-138069/.minikube}
	I0717 18:45:58.236075  145708 ubuntu.go:177] setting up certificates
	I0717 18:45:58.236087  145708 provision.go:83] configureAuth start
	I0717 18:45:58.236154  145708 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-646610
	I0717 18:45:58.252543  145708 provision.go:138] copyHostCerts
	I0717 18:45:58.252670  145708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-138069/.minikube/ca.pem (1078 bytes)
	I0717 18:45:58.252816  145708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-138069/.minikube/cert.pem (1123 bytes)
	I0717 18:45:58.252897  145708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-138069/.minikube/key.pem (1675 bytes)
	I0717 18:45:58.253001  145708 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-138069/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca-key.pem org=jenkins.addons-646610 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-646610]
	I0717 18:45:58.365703  145708 provision.go:172] copyRemoteCerts
	I0717 18:45:58.365768  145708 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 18:45:58.365818  145708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-646610
	I0717 18:45:58.382181  145708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/addons-646610/id_rsa Username:docker}
	I0717 18:45:58.476680  145708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 18:45:58.497998  145708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 18:45:58.518453  145708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0717 18:45:58.538859  145708 provision.go:86] duration metric: configureAuth took 302.752988ms
	I0717 18:45:58.538889  145708 ubuntu.go:193] setting minikube options for container-runtime
	I0717 18:45:58.539112  145708 config.go:182] Loaded profile config "addons-646610": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 18:45:58.539245  145708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-646610
	I0717 18:45:58.555596  145708 main.go:141] libmachine: Using SSH client type: native
	I0717 18:45:58.556070  145708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0717 18:45:58.556091  145708 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 18:45:58.773281  145708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 18:45:58.773317  145708 machine.go:91] provisioned docker machine in 875.200321ms
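	(Sketch, not output from this run: the SSH command above writes a one-line environment file for CRI-O and restarts the service; on the node the result can be checked directly.)
	# Sketch: verify the environment file the provisioner just wrote
	cat /etc/sysconfig/crio.minikube
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	sudo systemctl status crio --no-pager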
	I0717 18:45:58.773329  145708 client.go:171] LocalClient.Create took 14.248434672s
	I0717 18:45:58.773349  145708 start.go:167] duration metric: libmachine.API.Create for "addons-646610" took 14.248495502s
	I0717 18:45:58.773360  145708 start.go:300] post-start starting for "addons-646610" (driver="docker")
	I0717 18:45:58.773373  145708 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 18:45:58.773452  145708 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 18:45:58.773505  145708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-646610
	I0717 18:45:58.789191  145708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/addons-646610/id_rsa Username:docker}
	I0717 18:45:58.880585  145708 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 18:45:58.883525  145708 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 18:45:58.883551  145708 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 18:45:58.883562  145708 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 18:45:58.883569  145708 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0717 18:45:58.883580  145708 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-138069/.minikube/addons for local assets ...
	I0717 18:45:58.883632  145708 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-138069/.minikube/files for local assets ...
	I0717 18:45:58.883652  145708 start.go:303] post-start completed in 110.285739ms
	I0717 18:45:58.883918  145708 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-646610
	I0717 18:45:58.899756  145708 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/config.json ...
	I0717 18:45:58.900024  145708 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 18:45:58.900076  145708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-646610
	I0717 18:45:58.914980  145708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/addons-646610/id_rsa Username:docker}
	I0717 18:45:59.000815  145708 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 18:45:59.004959  145708 start.go:128] duration metric: createHost completed in 14.482310452s
	I0717 18:45:59.004984  145708 start.go:83] releasing machines lock for "addons-646610", held for 14.482441746s
	I0717 18:45:59.005065  145708 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-646610
	I0717 18:45:59.020966  145708 ssh_runner.go:195] Run: cat /version.json
	I0717 18:45:59.021036  145708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-646610
	I0717 18:45:59.020966  145708 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 18:45:59.021162  145708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-646610
	I0717 18:45:59.036976  145708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/addons-646610/id_rsa Username:docker}
	I0717 18:45:59.037380  145708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/addons-646610/id_rsa Username:docker}
	W0717 18:45:59.216390  145708 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
	I0717 18:45:59.216470  145708 ssh_runner.go:195] Run: systemctl --version
	I0717 18:45:59.220709  145708 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 18:45:59.356861  145708 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 18:45:59.360962  145708 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 18:45:59.377884  145708 cni.go:227] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0717 18:45:59.377970  145708 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 18:45:59.404200  145708 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0717 18:45:59.404226  145708 start.go:469] detecting cgroup driver to use...
	I0717 18:45:59.404262  145708 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 18:45:59.404306  145708 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 18:45:59.417891  145708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 18:45:59.427551  145708 docker.go:196] disabling cri-docker service (if available) ...
	I0717 18:45:59.427604  145708 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 18:45:59.439404  145708 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 18:45:59.451573  145708 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 18:45:59.532145  145708 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 18:45:59.612252  145708 docker.go:212] disabling docker service ...
	I0717 18:45:59.612318  145708 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 18:45:59.629212  145708 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 18:45:59.639590  145708 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 18:45:59.711746  145708 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 18:45:59.787928  145708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 18:45:59.798217  145708 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 18:45:59.812317  145708 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 18:45:59.812371  145708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:45:59.820833  145708 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 18:45:59.820896  145708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:45:59.829397  145708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:45:59.837597  145708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
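	(Sketch, not output from this run: the sed edits above — pause image, cgroup_manager, and conmon_cgroup — all land in the same drop-in file, so the resulting values can be confirmed in one place.)
	# Sketch: show the keys the sed commands above rewrote
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.9"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"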
	I0717 18:45:59.846139  145708 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 18:45:59.853946  145708 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 18:45:59.861146  145708 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 18:45:59.868287  145708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:45:59.938702  145708 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 18:46:00.037847  145708 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 18:46:00.037939  145708 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 18:46:00.041448  145708 start.go:537] Will wait 60s for crictl version
	I0717 18:46:00.041507  145708 ssh_runner.go:195] Run: which crictl
	I0717 18:46:00.044668  145708 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 18:46:00.080656  145708 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
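	(Sketch, not output from this run: the version block above is crictl reaching CRI-O through the runtime endpoint written to /etc/crictl.yaml earlier; the same check by hand would be:)
	# Sketch: crictl picks up its runtime endpoint from /etc/crictl.yaml
	cat /etc/crictl.yaml
	# runtime-endpoint: unix:///var/run/crio/crio.sock
	sudo crictl version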
	I0717 18:46:00.080759  145708 ssh_runner.go:195] Run: crio --version
	I0717 18:46:00.115138  145708 ssh_runner.go:195] Run: crio --version
	I0717 18:46:00.149545  145708 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.6 ...
	I0717 18:46:00.151400  145708 cli_runner.go:164] Run: docker network inspect addons-646610 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 18:46:00.167902  145708 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0717 18:46:00.171240  145708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
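	(Sketch, not output from this run: the bash one-liner above rewrites /etc/hosts through a temp file so the entry appears exactly once; afterwards the node resolves the gateway name locally.)
	# Sketch: the single entry the rewrite above leaves behind
	grep 'host.minikube.internal' /etc/hosts
	# 192.168.49.1	host.minikube.internal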
	I0717 18:46:00.181147  145708 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 18:46:00.181205  145708 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:46:00.229331  145708 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 18:46:00.229354  145708 crio.go:415] Images already preloaded, skipping extraction
	I0717 18:46:00.229395  145708 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:46:00.262290  145708 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 18:46:00.262314  145708 cache_images.go:84] Images are preloaded, skipping loading
	I0717 18:46:00.262385  145708 ssh_runner.go:195] Run: crio config
	I0717 18:46:00.305053  145708 cni.go:84] Creating CNI manager for ""
	I0717 18:46:00.305080  145708 cni.go:149] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 18:46:00.305105  145708 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 18:46:00.305131  145708 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-646610 NodeName:addons-646610 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 18:46:00.305337  145708 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-646610"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 18:46:00.305434  145708 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-646610 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:addons-646610 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
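	(Sketch, not output from this run: the [Service] override above is copied by the scp lines just below to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, where systemd merges it with the base kubelet.service; the merged unit can be inspected on the node.)
	# Sketch: show the kubelet unit together with the 10-kubeadm.conf drop-in
	sudo systemctl daemon-reload
	systemctl cat kubelet --no-pager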
	I0717 18:46:00.305501  145708 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 18:46:00.313634  145708 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 18:46:00.313700  145708 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 18:46:00.321272  145708 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I0717 18:46:00.336606  145708 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 18:46:00.351604  145708 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
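	(Sketch, not run by the test: the 2094-byte kubeadm.yaml.new written above is the rendered config shown earlier; assuming the pinned binaries under /var/lib/minikube/binaries/v1.27.3, it could be validated without touching node state via a dry run.)
	# Sketch: dry-run the generated kubeadm config
	sudo /var/lib/minikube/binaries/v1.27.3/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run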
	I0717 18:46:00.366702  145708 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0717 18:46:00.369689  145708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:46:00.378784  145708 certs.go:56] Setting up /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610 for IP: 192.168.49.2
	I0717 18:46:00.378818  145708 certs.go:190] acquiring lock for shared ca certs: {Name:mk42196ce59710ebf500640671660e2f4656c84e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:46:00.378958  145708 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/16890-138069/.minikube/ca.key
	I0717 18:46:00.477220  145708 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16890-138069/.minikube/ca.crt ...
	I0717 18:46:00.477254  145708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-138069/.minikube/ca.crt: {Name:mk936825d261580a19b2893023970b7a99e238ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:46:00.477444  145708 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16890-138069/.minikube/ca.key ...
	I0717 18:46:00.477462  145708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-138069/.minikube/ca.key: {Name:mkdb1084a4f788852794ff27b219ddf12578139e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:46:00.477564  145708 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/16890-138069/.minikube/proxy-client-ca.key
	I0717 18:46:00.551460  145708 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16890-138069/.minikube/proxy-client-ca.crt ...
	I0717 18:46:00.551492  145708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-138069/.minikube/proxy-client-ca.crt: {Name:mkd54b99b34110b1030b6d4c865ed8ce1720ad9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:46:00.551684  145708 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16890-138069/.minikube/proxy-client-ca.key ...
	I0717 18:46:00.551700  145708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-138069/.minikube/proxy-client-ca.key: {Name:mk50b3451e781e42a271ac7080667f6da2e48aca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:46:00.552012  145708 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/client.key
	I0717 18:46:00.552041  145708 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/client.crt with IP's: []
	I0717 18:46:00.677163  145708 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/client.crt ...
	I0717 18:46:00.677200  145708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/client.crt: {Name:mk14d22291818dfd96c5ba06ba43c2fdd0f6a66c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:46:00.677404  145708 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/client.key ...
	I0717 18:46:00.677422  145708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/client.key: {Name:mk7128a06e6a1ad30c94646af961acebd38ba03d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:46:00.677516  145708 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/apiserver.key.dd3b5fb2
	I0717 18:46:00.677540  145708 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0717 18:46:00.831345  145708 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/apiserver.crt.dd3b5fb2 ...
	I0717 18:46:00.831380  145708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/apiserver.crt.dd3b5fb2: {Name:mk4334636d64c67f52fb6482047be4d293d15968 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:46:00.831576  145708 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/apiserver.key.dd3b5fb2 ...
	I0717 18:46:00.831595  145708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/apiserver.key.dd3b5fb2: {Name:mke7f8ac866c7d7d5bb2abf3809614d0b4782b74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:46:00.831694  145708 certs.go:337] copying /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/apiserver.crt
	I0717 18:46:00.831823  145708 certs.go:341] copying /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/apiserver.key
	I0717 18:46:00.831886  145708 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/proxy-client.key
	I0717 18:46:00.831908  145708 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/proxy-client.crt with IP's: []
	I0717 18:46:00.903486  145708 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/proxy-client.crt ...
	I0717 18:46:00.903519  145708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/proxy-client.crt: {Name:mk5976ddd9b8b787f16981caae64ed4f3f64b0a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:46:00.903696  145708 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/proxy-client.key ...
	I0717 18:46:00.903709  145708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/proxy-client.key: {Name:mkc89dc16e06ee468b3567547c1dea1bcdb0f21c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:46:00.903872  145708 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 18:46:00.903911  145708 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem (1078 bytes)
	I0717 18:46:00.903936  145708 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/home/jenkins/minikube-integration/16890-138069/.minikube/certs/cert.pem (1123 bytes)
	I0717 18:46:00.903960  145708 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/home/jenkins/minikube-integration/16890-138069/.minikube/certs/key.pem (1675 bytes)
	I0717 18:46:00.904660  145708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 18:46:00.926163  145708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 18:46:00.947637  145708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 18:46:00.968144  145708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 18:46:00.988795  145708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 18:46:01.008533  145708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 18:46:01.028626  145708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 18:46:01.049032  145708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 18:46:01.070145  145708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 18:46:01.090874  145708 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 18:46:01.105731  145708 ssh_runner.go:195] Run: openssl version
	I0717 18:46:01.110578  145708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 18:46:01.118964  145708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:46:01.121971  145708 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:46 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:46:01.122024  145708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:46:01.128050  145708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
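	(Sketch, not output from this run: the two steps above are how minikubeCA.pem becomes trusted system-wide; the symlink name is the certificate's subject hash, b5213941 in this run.)
	# Sketch: recompute the subject hash used to name the /etc/ssl/certs symlink
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	echo "$HASH"                     # b5213941 in this run
	ls -l "/etc/ssl/certs/${HASH}.0"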
	I0717 18:46:01.136152  145708 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 18:46:01.138876  145708 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 18:46:01.138917  145708 kubeadm.go:404] StartCluster: {Name:addons-646610 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-646610 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 18:46:01.139016  145708 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 18:46:01.139066  145708 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:46:01.171271  145708 cri.go:89] found id: ""
	I0717 18:46:01.171326  145708 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 18:46:01.179622  145708 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:46:01.187320  145708 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0717 18:46:01.187372  145708 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:46:01.194873  145708 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:46:01.194933  145708 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0717 18:46:01.237203  145708 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0717 18:46:01.237282  145708 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 18:46:01.273006  145708 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0717 18:46:01.273128  145708 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1037-gcp
	I0717 18:46:01.273195  145708 kubeadm.go:322] OS: Linux
	I0717 18:46:01.273264  145708 kubeadm.go:322] CGROUPS_CPU: enabled
	I0717 18:46:01.273343  145708 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0717 18:46:01.273416  145708 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0717 18:46:01.273463  145708 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0717 18:46:01.273503  145708 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0717 18:46:01.273552  145708 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0717 18:46:01.273600  145708 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0717 18:46:01.273682  145708 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0717 18:46:01.273756  145708 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0717 18:46:01.334562  145708 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 18:46:01.334711  145708 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 18:46:01.334819  145708 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 18:46:01.524524  145708 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 18:46:01.528853  145708 out.go:204]   - Generating certificates and keys ...
	I0717 18:46:01.528979  145708 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 18:46:01.529069  145708 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 18:46:01.678706  145708 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 18:46:01.862007  145708 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0717 18:46:02.016177  145708 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0717 18:46:02.126629  145708 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0717 18:46:02.429413  145708 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0717 18:46:02.429597  145708 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-646610 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0717 18:46:02.511412  145708 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0717 18:46:02.511565  145708 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-646610 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0717 18:46:02.696171  145708 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 18:46:02.844959  145708 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 18:46:03.032639  145708 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0717 18:46:03.032844  145708 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 18:46:03.114686  145708 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 18:46:03.164754  145708 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 18:46:03.274031  145708 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 18:46:03.321345  145708 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 18:46:03.329370  145708 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 18:46:03.330177  145708 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 18:46:03.330227  145708 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0717 18:46:03.404109  145708 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 18:46:03.406566  145708 out.go:204]   - Booting up control plane ...
	I0717 18:46:03.406748  145708 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 18:46:03.406867  145708 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 18:46:03.407754  145708 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 18:46:03.409188  145708 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 18:46:03.411808  145708 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 18:46:08.413558  145708 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.001699 seconds
	I0717 18:46:08.413685  145708 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 18:46:08.425044  145708 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 18:46:08.943078  145708 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 18:46:08.943345  145708 kubeadm.go:322] [mark-control-plane] Marking the node addons-646610 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 18:46:09.453334  145708 kubeadm.go:322] [bootstrap-token] Using token: 45lce3.534dhjcoqlq6cd4o
	I0717 18:46:09.455134  145708 out.go:204]   - Configuring RBAC rules ...
	I0717 18:46:09.455287  145708 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 18:46:09.459127  145708 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 18:46:09.468716  145708 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 18:46:09.472099  145708 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 18:46:09.474984  145708 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 18:46:09.478350  145708 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 18:46:09.492317  145708 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 18:46:09.715455  145708 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0717 18:46:09.867916  145708 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0717 18:46:09.869173  145708 kubeadm.go:322] 
	I0717 18:46:09.869269  145708 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0717 18:46:09.869277  145708 kubeadm.go:322] 
	I0717 18:46:09.869376  145708 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0717 18:46:09.869381  145708 kubeadm.go:322] 
	I0717 18:46:09.869413  145708 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0717 18:46:09.869486  145708 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 18:46:09.869550  145708 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 18:46:09.869556  145708 kubeadm.go:322] 
	I0717 18:46:09.869626  145708 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0717 18:46:09.869632  145708 kubeadm.go:322] 
	I0717 18:46:09.869698  145708 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 18:46:09.869704  145708 kubeadm.go:322] 
	I0717 18:46:09.869768  145708 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0717 18:46:09.869863  145708 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 18:46:09.869948  145708 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 18:46:09.869954  145708 kubeadm.go:322] 
	I0717 18:46:09.870057  145708 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 18:46:09.870157  145708 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0717 18:46:09.870164  145708 kubeadm.go:322] 
	I0717 18:46:09.870272  145708 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 45lce3.534dhjcoqlq6cd4o \
	I0717 18:46:09.870401  145708 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:937c4239101ec8b12459e4fa3de0759350fbf81fa4f52752b966f06f42d7d7ec \
	I0717 18:46:09.870427  145708 kubeadm.go:322] 	--control-plane 
	I0717 18:46:09.870434  145708 kubeadm.go:322] 
	I0717 18:46:09.870542  145708 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0717 18:46:09.870548  145708 kubeadm.go:322] 
	I0717 18:46:09.870644  145708 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 45lce3.534dhjcoqlq6cd4o \
	I0717 18:46:09.870774  145708 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:937c4239101ec8b12459e4fa3de0759350fbf81fa4f52752b966f06f42d7d7ec 
	I0717 18:46:09.873357  145708 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1037-gcp\n", err: exit status 1
	I0717 18:46:09.873611  145708 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 18:46:09.873672  145708 cni.go:84] Creating CNI manager for ""
	I0717 18:46:09.873690  145708 cni.go:149] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 18:46:09.876944  145708 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0717 18:46:09.879106  145708 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 18:46:09.884273  145708 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0717 18:46:09.884300  145708 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 18:46:09.965675  145708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
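	(Sketch, not output from this run: the apply above installs the kindnet manifest minikube rendered into /var/tmp/minikube/cni.yaml; a quick check that the CNI workload landed in kube-system would be:)
	# Sketch: list the daemonsets the CNI apply above created in kube-system
	sudo /var/lib/minikube/binaries/v1.27.3/kubectl \
	  --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get daemonsets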
	I0717 18:46:10.743550  145708 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 18:46:10.743650  145708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:10.743651  145708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=46c8e7c4243e42e98d29628785c0523fbabbd9b5 minikube.k8s.io/name=addons-646610 minikube.k8s.io/updated_at=2023_07_17T18_46_10_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:10.821441  145708 ops.go:34] apiserver oom_adj: -16
	I0717 18:46:10.821593  145708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:11.412552  145708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:11.912456  145708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:12.413008  145708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:12.912161  145708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:13.412044  145708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:13.912882  145708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:14.412832  145708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:14.912510  145708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:15.412222  145708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:15.912157  145708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:16.412149  145708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:16.912131  145708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:17.412838  145708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:17.912581  145708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:18.412600  145708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:18.912697  145708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:19.412719  145708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:19.912891  145708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:20.412307  145708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:20.912926  145708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:21.411914  145708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:21.912088  145708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:22.411957  145708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:22.912119  145708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:46:23.171801  145708 kubeadm.go:1081] duration metric: took 12.428237494s to wait for elevateKubeSystemPrivileges.
	I0717 18:46:23.171844  145708 kubeadm.go:406] StartCluster complete in 22.032930346s
	I0717 18:46:23.171868  145708 settings.go:142] acquiring lock: {Name:mk9765434b8f4871dd605367f6caa71617d51b6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:46:23.172036  145708 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16890-138069/kubeconfig
	I0717 18:46:23.172600  145708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-138069/kubeconfig: {Name:mkc53c034e0e90a78d013670a58d5882070a3e3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:46:23.173529  145708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 18:46:23.173730  145708 config.go:182] Loaded profile config "addons-646610": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 18:46:23.173721  145708 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
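	(Sketch, not output from this run: the toEnable map above reflects the addons requested for this profile; the same toggles can be inspected or changed per profile from the CLI, for example:)
	# Sketch: inspect and toggle addons for this profile with the built binary
	out/minikube-linux-amd64 -p addons-646610 addons list
	out/minikube-linux-amd64 -p addons-646610 addons enable metrics-server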
	I0717 18:46:23.173839  145708 addons.go:69] Setting volumesnapshots=true in profile "addons-646610"
	I0717 18:46:23.173857  145708 addons.go:231] Setting addon volumesnapshots=true in "addons-646610"
	I0717 18:46:23.173901  145708 host.go:66] Checking if "addons-646610" exists ...
	I0717 18:46:23.173990  145708 addons.go:69] Setting ingress=true in profile "addons-646610"
	I0717 18:46:23.174016  145708 addons.go:231] Setting addon ingress=true in "addons-646610"
	I0717 18:46:23.174077  145708 host.go:66] Checking if "addons-646610" exists ...
	I0717 18:46:23.174264  145708 cli_runner.go:164] Run: docker container inspect addons-646610 --format={{.State.Status}}
	I0717 18:46:23.174556  145708 cli_runner.go:164] Run: docker container inspect addons-646610 --format={{.State.Status}}
	I0717 18:46:23.174907  145708 addons.go:69] Setting ingress-dns=true in profile "addons-646610"
	I0717 18:46:23.174963  145708 addons.go:231] Setting addon ingress-dns=true in "addons-646610"
	I0717 18:46:23.175043  145708 addons.go:69] Setting cloud-spanner=true in profile "addons-646610"
	I0717 18:46:23.175056  145708 addons.go:231] Setting addon cloud-spanner=true in "addons-646610"
	I0717 18:46:23.175085  145708 host.go:66] Checking if "addons-646610" exists ...
	I0717 18:46:23.175490  145708 cli_runner.go:164] Run: docker container inspect addons-646610 --format={{.State.Status}}
	I0717 18:46:23.175629  145708 addons.go:69] Setting gcp-auth=true in profile "addons-646610"
	I0717 18:46:23.175655  145708 mustload.go:65] Loading cluster: addons-646610
	I0717 18:46:23.175663  145708 host.go:66] Checking if "addons-646610" exists ...
	I0717 18:46:23.175850  145708 config.go:182] Loaded profile config "addons-646610": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 18:46:23.176022  145708 addons.go:69] Setting registry=true in profile "addons-646610"
	I0717 18:46:23.176038  145708 addons.go:231] Setting addon registry=true in "addons-646610"
	I0717 18:46:23.176059  145708 addons.go:69] Setting helm-tiller=true in profile "addons-646610"
	I0717 18:46:23.176085  145708 addons.go:231] Setting addon helm-tiller=true in "addons-646610"
	I0717 18:46:23.176097  145708 addons.go:69] Setting inspektor-gadget=true in profile "addons-646610"
	I0717 18:46:23.176107  145708 addons.go:231] Setting addon inspektor-gadget=true in "addons-646610"
	I0717 18:46:23.176132  145708 host.go:66] Checking if "addons-646610" exists ...
	I0717 18:46:23.176136  145708 host.go:66] Checking if "addons-646610" exists ...
	I0717 18:46:23.176234  145708 cli_runner.go:164] Run: docker container inspect addons-646610 --format={{.State.Status}}
	I0717 18:46:23.176346  145708 addons.go:69] Setting storage-provisioner=true in profile "addons-646610"
	I0717 18:46:23.176359  145708 addons.go:231] Setting addon storage-provisioner=true in "addons-646610"
	I0717 18:46:23.176404  145708 host.go:66] Checking if "addons-646610" exists ...
	I0717 18:46:23.176558  145708 cli_runner.go:164] Run: docker container inspect addons-646610 --format={{.State.Status}}
	I0717 18:46:23.176585  145708 cli_runner.go:164] Run: docker container inspect addons-646610 --format={{.State.Status}}
	I0717 18:46:23.176648  145708 addons.go:69] Setting metrics-server=true in profile "addons-646610"
	I0717 18:46:23.176661  145708 addons.go:231] Setting addon metrics-server=true in "addons-646610"
	I0717 18:46:23.176085  145708 host.go:66] Checking if "addons-646610" exists ...
	I0717 18:46:23.176690  145708 host.go:66] Checking if "addons-646610" exists ...
	I0717 18:46:23.176915  145708 cli_runner.go:164] Run: docker container inspect addons-646610 --format={{.State.Status}}
	I0717 18:46:23.177063  145708 cli_runner.go:164] Run: docker container inspect addons-646610 --format={{.State.Status}}
	I0717 18:46:23.177071  145708 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-646610"
	I0717 18:46:23.177102  145708 cli_runner.go:164] Run: docker container inspect addons-646610 --format={{.State.Status}}
	I0717 18:46:23.177123  145708 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-646610"
	I0717 18:46:23.177154  145708 addons.go:69] Setting default-storageclass=true in profile "addons-646610"
	I0717 18:46:23.177160  145708 host.go:66] Checking if "addons-646610" exists ...
	I0717 18:46:23.177169  145708 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-646610"
	I0717 18:46:23.177383  145708 cli_runner.go:164] Run: docker container inspect addons-646610 --format={{.State.Status}}
	I0717 18:46:23.177433  145708 cli_runner.go:164] Run: docker container inspect addons-646610 --format={{.State.Status}}
	I0717 18:46:23.177648  145708 cli_runner.go:164] Run: docker container inspect addons-646610 --format={{.State.Status}}
	I0717 18:46:23.210052  145708 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0717 18:46:23.211713  145708 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0717 18:46:23.211735  145708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0717 18:46:23.211805  145708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-646610
	I0717 18:46:23.225599  145708 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.18.1
	I0717 18:46:23.227199  145708 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0717 18:46:23.227233  145708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0717 18:46:23.227308  145708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-646610
	I0717 18:46:23.233341  145708 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:46:23.235080  145708 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:46:23.236581  145708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 18:46:23.236671  145708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-646610
	I0717 18:46:23.249111  145708 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0717 18:46:23.250940  145708 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0717 18:46:23.250966  145708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0717 18:46:23.251043  145708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-646610
	I0717 18:46:23.252921  145708 out.go:177]   - Using image docker.io/registry:2.8.1
	I0717 18:46:23.254679  145708 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0717 18:46:23.256690  145708 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I0717 18:46:23.256710  145708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0717 18:46:23.256853  145708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-646610
	I0717 18:46:23.259088  145708 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.7
	I0717 18:46:23.260899  145708 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I0717 18:46:23.260921  145708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1003 bytes)
	I0717 18:46:23.260984  145708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-646610
	I0717 18:46:23.268747  145708 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.8.1
	I0717 18:46:23.270576  145708 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0717 18:46:23.272274  145708 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0717 18:46:23.272255  145708 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0717 18:46:23.274824  145708 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0717 18:46:23.275058  145708 host.go:66] Checking if "addons-646610" exists ...
	I0717 18:46:23.277486  145708 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0717 18:46:23.279007  145708 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0717 18:46:23.281218  145708 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.3
	I0717 18:46:23.283387  145708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/addons-646610/id_rsa Username:docker}
	I0717 18:46:23.284414  145708 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0717 18:46:23.284568  145708 addons.go:231] Setting addon default-storageclass=true in "addons-646610"
	I0717 18:46:23.284706  145708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/addons-646610/id_rsa Username:docker}
	I0717 18:46:23.285798  145708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I0717 18:46:23.288743  145708 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0717 18:46:23.288775  145708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/addons-646610/id_rsa Username:docker}
	I0717 18:46:23.289807  145708 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 18:46:23.290792  145708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 18:46:23.290856  145708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-646610
	I0717 18:46:23.290950  145708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0717 18:46:23.290999  145708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-646610
	I0717 18:46:23.291087  145708 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0717 18:46:23.291746  145708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/addons-646610/id_rsa Username:docker}
	I0717 18:46:23.290590  145708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-646610
	I0717 18:46:23.289846  145708 host.go:66] Checking if "addons-646610" exists ...
	I0717 18:46:23.294185  145708 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0717 18:46:23.301410  145708 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0717 18:46:23.300613  145708 cli_runner.go:164] Run: docker container inspect addons-646610 --format={{.State.Status}}
	I0717 18:46:23.304542  145708 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0717 18:46:23.307830  145708 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0717 18:46:23.309857  145708 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0717 18:46:23.309884  145708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0717 18:46:23.309951  145708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-646610
	I0717 18:46:23.310417  145708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/addons-646610/id_rsa Username:docker}
	I0717 18:46:23.322945  145708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/addons-646610/id_rsa Username:docker}
	I0717 18:46:23.334205  145708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/addons-646610/id_rsa Username:docker}
	I0717 18:46:23.336293  145708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/addons-646610/id_rsa Username:docker}
	I0717 18:46:23.341314  145708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/addons-646610/id_rsa Username:docker}
	I0717 18:46:23.341322  145708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/addons-646610/id_rsa Username:docker}
	I0717 18:46:23.345271  145708 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 18:46:23.345302  145708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 18:46:23.345344  145708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-646610
	I0717 18:46:23.361102  145708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/addons-646610/id_rsa Username:docker}
	I0717 18:46:23.473261  145708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 18:46:23.664251  145708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:46:23.664314  145708 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 18:46:23.664333  145708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0717 18:46:23.669146  145708 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0717 18:46:23.669175  145708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0717 18:46:23.672788  145708 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0717 18:46:23.672817  145708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0717 18:46:23.673929  145708 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0717 18:46:23.673952  145708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0717 18:46:23.679094  145708 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0717 18:46:23.679121  145708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0717 18:46:23.763139  145708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0717 18:46:23.763488  145708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0717 18:46:23.773794  145708 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-646610" context rescaled to 1 replicas
	I0717 18:46:23.773852  145708 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:46:23.776049  145708 out.go:177] * Verifying Kubernetes components...
	I0717 18:46:23.778162  145708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:46:23.781862  145708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0717 18:46:23.864905  145708 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0717 18:46:23.864943  145708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0717 18:46:23.869326  145708 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0717 18:46:23.869418  145708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0717 18:46:23.876561  145708 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I0717 18:46:23.876654  145708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0717 18:46:23.882201  145708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 18:46:23.887606  145708 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 18:46:23.887636  145708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 18:46:23.963933  145708 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0717 18:46:23.963988  145708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0717 18:46:23.966397  145708 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I0717 18:46:23.966424  145708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0717 18:46:24.079965  145708 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0717 18:46:24.080008  145708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0717 18:46:24.174261  145708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0717 18:46:24.178900  145708 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0717 18:46:24.178981  145708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0717 18:46:24.279876  145708 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0717 18:46:24.279999  145708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0717 18:46:24.280742  145708 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 18:46:24.280818  145708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 18:46:24.281661  145708 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0717 18:46:24.281726  145708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0717 18:46:24.471574  145708 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0717 18:46:24.471605  145708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0717 18:46:24.471719  145708 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0717 18:46:24.471735  145708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0717 18:46:24.564111  145708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0717 18:46:24.583063  145708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 18:46:24.764716  145708 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0717 18:46:24.764750  145708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0717 18:46:24.764879  145708 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0717 18:46:24.764894  145708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0717 18:46:24.773629  145708 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0717 18:46:24.773662  145708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0717 18:46:24.974302  145708 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 18:46:24.974334  145708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0717 18:46:25.063240  145708 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I0717 18:46:25.063280  145708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0717 18:46:25.265093  145708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 18:46:25.363481  145708 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0717 18:46:25.363523  145708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0717 18:46:25.462993  145708 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0717 18:46:25.463071  145708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0717 18:46:25.763438  145708 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.289888325s)
	I0717 18:46:25.763493  145708 start.go:917] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0717 18:46:25.964496  145708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0717 18:46:26.073761  145708 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0717 18:46:26.073794  145708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0717 18:46:26.471278  145708 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0717 18:46:26.471308  145708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0717 18:46:26.777024  145708 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0717 18:46:26.777117  145708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0717 18:46:27.065398  145708 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0717 18:46:27.065482  145708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0717 18:46:27.175705  145708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0717 18:46:28.572581  145708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.908277958s)
	I0717 18:46:29.576475  145708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.812928078s)
	I0717 18:46:29.576523  145708 addons.go:467] Verifying addon ingress=true in "addons-646610"
	I0717 18:46:29.576548  145708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.813367875s)
	I0717 18:46:29.578748  145708 out.go:177] * Verifying ingress addon...
	I0717 18:46:29.576612  145708 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (5.798415417s)
	I0717 18:46:29.576644  145708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.794751954s)
	I0717 18:46:29.576698  145708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.694453726s)
	I0717 18:46:29.576750  145708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (5.402435498s)
	I0717 18:46:29.576785  145708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.012638143s)
	I0717 18:46:29.576850  145708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.993756303s)
	I0717 18:46:29.576921  145708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.311744762s)
	I0717 18:46:29.576981  145708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.612451669s)
	I0717 18:46:29.581300  145708 addons.go:467] Verifying addon metrics-server=true in "addons-646610"
	I0717 18:46:29.581313  145708 addons.go:467] Verifying addon registry=true in "addons-646610"
	W0717 18:46:29.581357  145708 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0717 18:46:29.581383  145708 retry.go:31] will retry after 196.903894ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0717 18:46:29.583315  145708 out.go:177] * Verifying registry addon...
	I0717 18:46:29.582234  145708 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0717 18:46:29.582234  145708 node_ready.go:35] waiting up to 6m0s for node "addons-646610" to be "Ready" ...
	I0717 18:46:29.585983  145708 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0717 18:46:29.590629  145708 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0717 18:46:29.590652  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:29.591454  145708 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0717 18:46:29.591475  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:29.779169  145708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0717 18:46:30.090063  145708 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0717 18:46:30.090143  145708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-646610
	I0717 18:46:30.108878  145708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/addons-646610/id_rsa Username:docker}
	I0717 18:46:30.166683  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:30.167010  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:30.468461  145708 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0717 18:46:30.662138  145708 addons.go:231] Setting addon gcp-auth=true in "addons-646610"
	I0717 18:46:30.662226  145708 host.go:66] Checking if "addons-646610" exists ...
	I0717 18:46:30.662754  145708 cli_runner.go:164] Run: docker container inspect addons-646610 --format={{.State.Status}}
	I0717 18:46:30.681483  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:30.681740  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:30.696074  145708 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0717 18:46:30.696139  145708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-646610
	I0717 18:46:30.712479  145708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/addons-646610/id_rsa Username:docker}
	I0717 18:46:31.179129  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:31.270670  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:31.671480  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:31.672626  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:31.673259  145708 node_ready.go:58] node "addons-646610" has status "Ready":"False"
	I0717 18:46:32.176435  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:32.178540  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:32.669353  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:32.673759  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:33.171705  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:33.173064  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:33.570255  145708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.394436368s)
	I0717 18:46:33.570390  145708 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-646610"
	I0717 18:46:33.572960  145708 out.go:177] * Verifying csi-hostpath-driver addon...
	I0717 18:46:33.576851  145708 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0717 18:46:33.664569  145708 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0717 18:46:33.664603  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:46:33.666465  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:33.669934  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:34.167840  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:34.168991  145708 node_ready.go:58] node "addons-646610" has status "Ready":"False"
	I0717 18:46:34.173557  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:46:34.174290  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:34.263964  145708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.484722781s)
	I0717 18:46:34.264156  145708 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.568035439s)
	I0717 18:46:34.266429  145708 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0717 18:46:34.268200  145708 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0717 18:46:34.269760  145708 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0717 18:46:34.269785  145708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0717 18:46:34.363803  145708 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0717 18:46:34.363894  145708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0717 18:46:34.385816  145708 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0717 18:46:34.385907  145708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0717 18:46:34.488625  145708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0717 18:46:34.668819  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:34.673737  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:34.680306  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:46:35.165698  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:35.165771  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:35.169513  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:46:35.595880  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:35.596213  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:35.670451  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:46:36.095510  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:36.096761  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:36.170489  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:46:36.592817  145708 node_ready.go:58] node "addons-646610" has status "Ready":"False"
	I0717 18:46:36.594994  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:36.596658  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:36.670302  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:46:37.097128  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:37.097407  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:37.170514  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:46:37.569896  145708 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (3.081164216s)
	I0717 18:46:37.571478  145708 addons.go:467] Verifying addon gcp-auth=true in "addons-646610"
	I0717 18:46:37.574833  145708 out.go:177] * Verifying gcp-auth addon...
	I0717 18:46:37.577653  145708 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0717 18:46:37.580392  145708 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0717 18:46:37.580416  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:46:37.595337  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:37.595478  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:37.669961  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:46:38.084124  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:46:38.094659  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:38.095232  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:38.169409  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:46:38.584153  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:46:38.594988  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:38.595314  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:38.669561  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:46:39.083768  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:46:39.091800  145708 node_ready.go:58] node "addons-646610" has status "Ready":"False"
	I0717 18:46:39.095027  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:39.095164  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:39.169701  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:46:39.583842  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:46:39.594212  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:39.595096  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:39.669323  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:46:40.084584  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:46:40.095492  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:40.095922  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:40.168918  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:46:40.584082  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:46:40.594989  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:40.595289  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:40.670125  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:46:41.084227  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:46:41.094589  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:41.094590  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:41.169865  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:46:41.583941  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:46:41.591503  145708 node_ready.go:58] node "addons-646610" has status "Ready":"False"
	I0717 18:46:41.594386  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:41.594528  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:41.669251  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:46:42.084259  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:46:42.094528  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:42.095195  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:42.169429  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:46:42.584046  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:46:42.593998  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:42.594758  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:42.670116  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:46:43.084259  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:46:43.094445  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:43.094589  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:43.169646  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:46:43.583738  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:46:43.594265  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:43.595020  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:43.669127  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:46:44.083453  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:46:44.091797  145708 node_ready.go:58] node "addons-646610" has status "Ready":"False"
	I0717 18:46:44.094792  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:44.094904  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:44.168958  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:46:44.583762  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:46:44.595113  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:44.595254  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:44.669768  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:46:45.083547  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:46:45.094833  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:45.094887  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:45.168579  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:46:45.583871  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:46:45.594958  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:45.595525  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:45.669644  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:46:46.083766  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:46:46.094085  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:46.094790  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:46.168697  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:46:46.583759  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:46:46.592027  145708 node_ready.go:58] node "addons-646610" has status "Ready":"False"
	I0717 18:46:46.595069  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:46.595159  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:46.669041  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:46:47.087460  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:46:47.094489  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:47.094803  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:47.169895  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:46:47.584003  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:46:47.594399  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:47.594876  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:47.668534  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:46:48.083386  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:46:48.094314  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:48.094678  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:48.169480  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:46:48.585088  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:46:48.593884  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:48.595004  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:48.669144  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:46:49.084461  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:46:49.091840  145708 node_ready.go:58] node "addons-646610" has status "Ready":"False"
	I0717 18:46:49.094754  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:49.094828  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:49.168393  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:46:49.585201  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:46:49.594633  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:49.595347  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:49.668760  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:46:50.083784  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:46:50.093873  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:50.094620  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:50.169414  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:46:50.584724  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:46:50.594908  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:50.595077  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:50.668829  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:46:51.083907  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:46:51.093972  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:51.094702  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:51.168808  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:46:51.583990  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:46:51.591185  145708 node_ready.go:58] node "addons-646610" has status "Ready":"False"
	I0717 18:46:51.593692  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:51.595859  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:51.668543  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:46:52.083523  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:46:52.094893  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:52.095032  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:52.169142  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:46:52.584098  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:46:52.594243  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:52.594830  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:52.668476  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:46:53.083549  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:46:53.094758  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:53.095038  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:53.169673  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:46:53.583933  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:46:53.594069  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:53.595396  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:53.669488  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:46:54.083868  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:46:54.091185  145708 node_ready.go:58] node "addons-646610" has status "Ready":"False"
	I0717 18:46:54.094065  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:54.095089  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:54.169079  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:46:54.584553  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:46:54.595177  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:54.595322  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:54.669149  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:46:55.084462  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:46:55.094430  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:55.095269  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:55.169209  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:46:55.584237  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:46:55.594993  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:55.595037  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:55.669372  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:46:56.090676  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:46:56.185146  145708 node_ready.go:49] node "addons-646610" has status "Ready":"True"
	I0717 18:46:56.185184  145708 node_ready.go:38] duration metric: took 26.600036266s waiting for node "addons-646610" to be "Ready" ...
	I0717 18:46:56.185196  145708 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:46:56.186203  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:56.187215  145708 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0717 18:46:56.187232  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:46:56.187577  145708 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0717 18:46:56.187590  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:56.278596  145708 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-8sfsd" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:56.588695  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:46:56.665457  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:56.665819  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:56.671526  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:46:57.084054  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:46:57.096488  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:57.096624  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:57.169982  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:46:57.584942  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:46:57.596406  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:57.596724  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:57.670720  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:46:57.793352  145708 pod_ready.go:92] pod "coredns-5d78c9869d-8sfsd" in "kube-system" namespace has status "Ready":"True"
	I0717 18:46:57.793378  145708 pod_ready.go:81] duration metric: took 1.514742954s waiting for pod "coredns-5d78c9869d-8sfsd" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:57.793405  145708 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-646610" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:57.798641  145708 pod_ready.go:92] pod "etcd-addons-646610" in "kube-system" namespace has status "Ready":"True"
	I0717 18:46:57.798668  145708 pod_ready.go:81] duration metric: took 5.254796ms waiting for pod "etcd-addons-646610" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:57.798681  145708 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-646610" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:57.804047  145708 pod_ready.go:92] pod "kube-apiserver-addons-646610" in "kube-system" namespace has status "Ready":"True"
	I0717 18:46:57.804068  145708 pod_ready.go:81] duration metric: took 5.380788ms waiting for pod "kube-apiserver-addons-646610" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:57.804077  145708 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-646610" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:57.809028  145708 pod_ready.go:92] pod "kube-controller-manager-addons-646610" in "kube-system" namespace has status "Ready":"True"
	I0717 18:46:57.809049  145708 pod_ready.go:81] duration metric: took 4.966316ms waiting for pod "kube-controller-manager-addons-646610" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:57.809059  145708 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rh6wx" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:57.813729  145708 pod_ready.go:92] pod "kube-proxy-rh6wx" in "kube-system" namespace has status "Ready":"True"
	I0717 18:46:57.813749  145708 pod_ready.go:81] duration metric: took 4.682766ms waiting for pod "kube-proxy-rh6wx" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:57.813757  145708 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-646610" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:58.084414  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:46:58.095149  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:58.096480  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:58.171859  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:46:58.190016  145708 pod_ready.go:92] pod "kube-scheduler-addons-646610" in "kube-system" namespace has status "Ready":"True"
	I0717 18:46:58.190040  145708 pod_ready.go:81] duration metric: took 376.277846ms waiting for pod "kube-scheduler-addons-646610" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:58.190052  145708 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-844d8db974-k86gh" in "kube-system" namespace to be "Ready" ...
	I0717 18:46:58.583894  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:46:58.595334  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:58.595513  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:58.670246  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:46:59.084841  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:46:59.097253  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:59.097694  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:59.170742  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:46:59.586935  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:46:59.671253  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:46:59.673313  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:46:59.675019  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:00.084801  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:00.095925  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:00.096093  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:47:00.172659  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:00.584600  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:00.595071  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:00.595966  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:47:00.596638  145708 pod_ready.go:102] pod "metrics-server-844d8db974-k86gh" in "kube-system" namespace has status "Ready":"False"
	I0717 18:47:00.670954  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:01.084606  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:01.095191  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:01.096227  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:47:01.171298  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:01.584445  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:01.595908  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:01.596601  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:47:01.671417  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:02.084333  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:02.094902  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:02.096231  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:47:02.170905  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:02.584953  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:02.595799  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:02.598334  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:47:02.599527  145708 pod_ready.go:102] pod "metrics-server-844d8db974-k86gh" in "kube-system" namespace has status "Ready":"False"
	I0717 18:47:02.671258  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:03.085177  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:03.096080  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:47:03.096149  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:03.172802  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:03.584644  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:03.595461  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:03.595495  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:47:03.670637  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:04.083665  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:04.095482  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:04.096344  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:47:04.169707  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:04.585111  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:04.675212  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:04.676125  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:47:04.677052  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:04.681165  145708 pod_ready.go:102] pod "metrics-server-844d8db974-k86gh" in "kube-system" namespace has status "Ready":"False"
	I0717 18:47:05.085577  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:05.096264  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:05.167276  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:47:05.172627  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:05.584447  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:05.595713  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:05.596491  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:47:05.669979  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:06.084322  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:06.095482  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:06.096344  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:47:06.169834  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:06.584950  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:06.595564  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:06.596983  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:47:06.672764  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:07.084203  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:07.095396  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:07.096208  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:47:07.097350  145708 pod_ready.go:102] pod "metrics-server-844d8db974-k86gh" in "kube-system" namespace has status "Ready":"False"
	I0717 18:47:07.170150  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:07.584532  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:07.595658  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:07.595692  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:47:07.671377  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:08.084401  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:08.095732  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:08.098416  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:47:08.169854  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:08.584328  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:08.595278  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:08.596621  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:47:08.670685  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:09.085024  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:09.096875  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:09.163148  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:47:09.164115  145708 pod_ready.go:102] pod "metrics-server-844d8db974-k86gh" in "kube-system" namespace has status "Ready":"False"
	I0717 18:47:09.170512  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:09.585033  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:09.596373  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:47:09.596869  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:09.671755  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:10.083945  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:10.095641  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:10.095754  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:47:10.170657  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:10.584434  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:10.595418  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:10.597926  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:47:10.671204  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:11.084807  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:11.100049  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:47:11.167161  145708 pod_ready.go:102] pod "metrics-server-844d8db974-k86gh" in "kube-system" namespace has status "Ready":"False"
	I0717 18:47:11.170256  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:11.179944  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:11.584217  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:11.595908  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:47:11.596048  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:11.670428  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:12.084196  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:12.130896  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:12.131001  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:47:12.132979  145708 pod_ready.go:92] pod "metrics-server-844d8db974-k86gh" in "kube-system" namespace has status "Ready":"True"
	I0717 18:47:12.133006  145708 pod_ready.go:81] duration metric: took 13.942946427s waiting for pod "metrics-server-844d8db974-k86gh" in "kube-system" namespace to be "Ready" ...
	I0717 18:47:12.133033  145708 pod_ready.go:38] duration metric: took 15.9478211s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:47:12.133063  145708 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:47:12.133122  145708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:47:12.146511  145708 api_server.go:72] duration metric: took 48.372616513s to wait for apiserver process to appear ...
	I0717 18:47:12.146540  145708 api_server.go:88] waiting for apiserver healthz status ...
	I0717 18:47:12.146564  145708 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0717 18:47:12.151160  145708 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0717 18:47:12.152582  145708 api_server.go:141] control plane version: v1.27.3
	I0717 18:47:12.152675  145708 api_server.go:131] duration metric: took 6.126359ms to wait for apiserver health ...
	I0717 18:47:12.152696  145708 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 18:47:12.162045  145708 system_pods.go:59] 18 kube-system pods found
	I0717 18:47:12.162077  145708 system_pods.go:61] "coredns-5d78c9869d-8sfsd" [b9db8459-a929-459b-97f7-9803217576d1] Running
	I0717 18:47:12.162086  145708 system_pods.go:61] "csi-hostpath-attacher-0" [90435f37-880d-4eed-aff6-431116d06592] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0717 18:47:12.162094  145708 system_pods.go:61] "csi-hostpath-resizer-0" [8cb6e1e4-bd9e-463a-9bf4-c105d9319deb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0717 18:47:12.162102  145708 system_pods.go:61] "csi-hostpathplugin-q7pms" [ae808b5a-2d71-4d6d-9086-063116dc4b42] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0717 18:47:12.162107  145708 system_pods.go:61] "etcd-addons-646610" [e19c9dd5-c6fb-4fa2-ab08-439f8a988a50] Running
	I0717 18:47:12.162112  145708 system_pods.go:61] "kindnet-llwzl" [c7c79cb4-d07e-420a-a8a3-00898e4c7716] Running
	I0717 18:47:12.162116  145708 system_pods.go:61] "kube-apiserver-addons-646610" [c5c6766e-92db-4d9d-b4d5-c32f83196666] Running
	I0717 18:47:12.162122  145708 system_pods.go:61] "kube-controller-manager-addons-646610" [6ac313c0-a407-4fd6-b85d-461ff6f8f222] Running
	I0717 18:47:12.162126  145708 system_pods.go:61] "kube-ingress-dns-minikube" [15767547-b88a-45f1-a559-d3eab693cef4] Running
	I0717 18:47:12.162130  145708 system_pods.go:61] "kube-proxy-rh6wx" [49ab3829-7bb4-45b3-ba5b-ad930c4deb2b] Running
	I0717 18:47:12.162134  145708 system_pods.go:61] "kube-scheduler-addons-646610" [1636f05a-c961-4e2b-9ceb-ac700df891a3] Running
	I0717 18:47:12.162141  145708 system_pods.go:61] "metrics-server-844d8db974-k86gh" [c2507465-6216-4f38-b0e4-4479e08476b1] Running
	I0717 18:47:12.162146  145708 system_pods.go:61] "registry-pjmqt" [9129065f-dc8e-4ce0-812c-402a1e000938] Running
	I0717 18:47:12.162154  145708 system_pods.go:61] "registry-proxy-rxnkn" [f2533dac-e61a-494a-8898-7192c27a8c23] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0717 18:47:12.162161  145708 system_pods.go:61] "snapshot-controller-75bbb956b9-9tvk5" [92e32e7f-4b2a-4946-9e1a-aa5156510375] Running
	I0717 18:47:12.162169  145708 system_pods.go:61] "snapshot-controller-75bbb956b9-nzrzq" [b4bb0266-dec3-4b0b-af24-e79b35a8b5c3] Running
	I0717 18:47:12.162173  145708 system_pods.go:61] "storage-provisioner" [65854256-c652-439e-8ca7-5c2ffb134bfe] Running
	I0717 18:47:12.162177  145708 system_pods.go:61] "tiller-deploy-6847666dc-rfbl2" [ef135e71-be72-4bee-9e5a-486848ace98b] Running
	I0717 18:47:12.162183  145708 system_pods.go:74] duration metric: took 9.472093ms to wait for pod list to return data ...
	I0717 18:47:12.162193  145708 default_sa.go:34] waiting for default service account to be created ...
	I0717 18:47:12.165115  145708 default_sa.go:45] found service account: "default"
	I0717 18:47:12.165143  145708 default_sa.go:55] duration metric: took 2.944103ms for default service account to be created ...
	I0717 18:47:12.165153  145708 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 18:47:12.170476  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:12.175942  145708 system_pods.go:86] 18 kube-system pods found
	I0717 18:47:12.176000  145708 system_pods.go:89] "coredns-5d78c9869d-8sfsd" [b9db8459-a929-459b-97f7-9803217576d1] Running
	I0717 18:47:12.176016  145708 system_pods.go:89] "csi-hostpath-attacher-0" [90435f37-880d-4eed-aff6-431116d06592] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0717 18:47:12.176025  145708 system_pods.go:89] "csi-hostpath-resizer-0" [8cb6e1e4-bd9e-463a-9bf4-c105d9319deb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0717 18:47:12.176036  145708 system_pods.go:89] "csi-hostpathplugin-q7pms" [ae808b5a-2d71-4d6d-9086-063116dc4b42] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0717 18:47:12.176045  145708 system_pods.go:89] "etcd-addons-646610" [e19c9dd5-c6fb-4fa2-ab08-439f8a988a50] Running
	I0717 18:47:12.176052  145708 system_pods.go:89] "kindnet-llwzl" [c7c79cb4-d07e-420a-a8a3-00898e4c7716] Running
	I0717 18:47:12.176059  145708 system_pods.go:89] "kube-apiserver-addons-646610" [c5c6766e-92db-4d9d-b4d5-c32f83196666] Running
	I0717 18:47:12.176072  145708 system_pods.go:89] "kube-controller-manager-addons-646610" [6ac313c0-a407-4fd6-b85d-461ff6f8f222] Running
	I0717 18:47:12.176083  145708 system_pods.go:89] "kube-ingress-dns-minikube" [15767547-b88a-45f1-a559-d3eab693cef4] Running
	I0717 18:47:12.176087  145708 system_pods.go:89] "kube-proxy-rh6wx" [49ab3829-7bb4-45b3-ba5b-ad930c4deb2b] Running
	I0717 18:47:12.176091  145708 system_pods.go:89] "kube-scheduler-addons-646610" [1636f05a-c961-4e2b-9ceb-ac700df891a3] Running
	I0717 18:47:12.176095  145708 system_pods.go:89] "metrics-server-844d8db974-k86gh" [c2507465-6216-4f38-b0e4-4479e08476b1] Running
	I0717 18:47:12.176102  145708 system_pods.go:89] "registry-pjmqt" [9129065f-dc8e-4ce0-812c-402a1e000938] Running
	I0717 18:47:12.176110  145708 system_pods.go:89] "registry-proxy-rxnkn" [f2533dac-e61a-494a-8898-7192c27a8c23] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0717 18:47:12.176117  145708 system_pods.go:89] "snapshot-controller-75bbb956b9-9tvk5" [92e32e7f-4b2a-4946-9e1a-aa5156510375] Running
	I0717 18:47:12.176123  145708 system_pods.go:89] "snapshot-controller-75bbb956b9-nzrzq" [b4bb0266-dec3-4b0b-af24-e79b35a8b5c3] Running
	I0717 18:47:12.176130  145708 system_pods.go:89] "storage-provisioner" [65854256-c652-439e-8ca7-5c2ffb134bfe] Running
	I0717 18:47:12.176138  145708 system_pods.go:89] "tiller-deploy-6847666dc-rfbl2" [ef135e71-be72-4bee-9e5a-486848ace98b] Running
	I0717 18:47:12.176151  145708 system_pods.go:126] duration metric: took 10.992459ms to wait for k8s-apps to be running ...
	I0717 18:47:12.176166  145708 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 18:47:12.176211  145708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:47:12.188678  145708 system_svc.go:56] duration metric: took 12.500392ms WaitForService to wait for kubelet.
	I0717 18:47:12.188712  145708 kubeadm.go:581] duration metric: took 48.414823914s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 18:47:12.188739  145708 node_conditions.go:102] verifying NodePressure condition ...
	I0717 18:47:12.279817  145708 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0717 18:47:12.279858  145708 node_conditions.go:123] node cpu capacity is 8
	I0717 18:47:12.279877  145708 node_conditions.go:105] duration metric: took 91.131883ms to run NodePressure ...
	I0717 18:47:12.279895  145708 start.go:228] waiting for startup goroutines ...
	I0717 18:47:12.584115  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:12.594883  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:12.595909  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:47:12.670772  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:13.084467  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:13.095522  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:13.096974  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:47:13.171180  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:13.584138  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:13.599722  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:13.600866  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:47:13.669668  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:14.085165  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:14.094921  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:14.095996  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:47:14.170982  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:14.584737  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:14.596719  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:47:14.597019  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:14.669862  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:15.084610  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:15.095542  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:15.096750  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:47:15.170813  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:15.584517  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:15.595590  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:47:15.595795  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:15.670421  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:16.083690  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:16.095563  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:16.095581  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:47:16.170196  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:16.584292  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:16.595107  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:16.597297  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:47:16.670316  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:17.084120  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:17.095148  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:17.096018  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:47:17.170317  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:17.584205  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:17.595599  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:17.596298  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:47:17.669851  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:18.084729  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:18.095523  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:47:18.095620  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:18.169945  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:18.584737  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:18.595265  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:18.596125  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:47:18.670558  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:19.084534  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:19.095823  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:19.096418  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:47:19.169976  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:19.589952  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:19.599167  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:19.600046  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:47:19.672266  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:20.084750  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:20.096639  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:20.096842  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:47:20.171217  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:20.584404  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:20.597002  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:47:20.597014  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:20.670952  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:21.085261  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:21.095706  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:21.096338  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0717 18:47:21.170629  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:21.583797  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:21.595868  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:21.595958  145708 kapi.go:107] duration metric: took 52.009976006s to wait for kubernetes.io/minikube-addons=registry ...
	I0717 18:47:21.670375  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:22.084714  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:22.095070  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:22.171185  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:22.584641  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:22.595853  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:22.670585  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:23.084385  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:23.094492  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:23.169814  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:23.583308  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:23.595113  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:23.670702  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:24.083722  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:24.095034  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:24.169898  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:24.585270  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:24.594490  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:24.671012  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:25.084508  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:25.094656  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:25.171145  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:25.584203  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:25.594477  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:25.670945  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:26.084572  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:26.095117  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:26.170403  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:26.583898  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:26.595335  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:26.670496  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:27.084430  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:27.094860  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:27.170377  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0717 18:47:27.583718  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:27.594886  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:27.670071  145708 kapi.go:107] duration metric: took 54.093217822s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0717 18:47:28.084173  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:28.094651  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:28.584484  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:28.594817  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:29.084421  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:29.095689  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:29.584975  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:29.596914  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:30.084353  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:30.095301  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:30.675992  145708 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0717 18:47:30.676858  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:31.084103  145708 kapi.go:107] duration metric: took 53.506447991s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0717 18:47:31.086319  145708 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-646610 cluster.
	I0717 18:47:31.088242  145708 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0717 18:47:31.090991  145708 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0717 18:47:31.098048  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:31.665798  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:32.096298  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:32.594944  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:33.096636  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:33.595218  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:34.095237  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:34.595443  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:35.095307  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:35.594566  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:36.095037  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:36.594782  145708 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0717 18:47:37.095264  145708 kapi.go:107] duration metric: took 1m7.513026234s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0717 18:47:37.097544  145708 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, inspektor-gadget, ingress-dns, default-storageclass, helm-tiller, metrics-server, volumesnapshots, registry, csi-hostpath-driver, gcp-auth, ingress
	I0717 18:47:37.099381  145708 addons.go:502] enable addons completed in 1m13.925651268s: enabled=[storage-provisioner cloud-spanner inspektor-gadget ingress-dns default-storageclass helm-tiller metrics-server volumesnapshots registry csi-hostpath-driver gcp-auth ingress]
	I0717 18:47:37.099435  145708 start.go:233] waiting for cluster config update ...
	I0717 18:47:37.099454  145708 start.go:242] writing updated cluster config ...
	I0717 18:47:37.099773  145708 ssh_runner.go:195] Run: rm -f paused
	I0717 18:47:37.151195  145708 start.go:578] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0717 18:47:37.153359  145708 out.go:177] * Done! kubectl is now configured to use "addons-646610" cluster and "default" namespace by default
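The gcp-auth messages at 18:47:31 in the log above point to the `gcp-auth-skip-secret` pod label as the way to opt a pod out of credential mounting. As a rough, hypothetical sketch only (the pod name is invented, the image is borrowed from the hello-world-app image pulled later in this report, and the "true" value is an assumption, since the log names only the label key), such a pod spec could look like:

	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds-demo            # illustrative name, not part of this test run
	  labels:
	    gcp-auth-skip-secret: "true"     # label key taken from the log above; value assumed
	spec:
	  containers:
	  - name: app
	    image: gcr.io/google-samples/hello-app:1.0   # image also seen in the CRI-O log below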
	
	* 
	* ==> CRI-O <==
	* Jul 17 18:50:11 addons-646610 crio[944]: time="2023-07-17 18:50:11.828370362Z" level=info msg="Pulled image: gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea" id=31e1a1fd-843e-468b-b59c-ef348f35eb2c name=/runtime.v1.ImageService/PullImage
	Jul 17 18:50:11 addons-646610 crio[944]: time="2023-07-17 18:50:11.829125595Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=8ee4ed0a-4514-4ec8-9c40-8deaf45bb696 name=/runtime.v1.ImageService/ImageStatus
	Jul 17 18:50:11 addons-646610 crio[944]: time="2023-07-17 18:50:11.829774323Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:13753a81eccfdd153bf7fc9a4c9198edbcce0110e7f46ed0d38cc654a6458ff5,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea],Size_:28496999,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=8ee4ed0a-4514-4ec8-9c40-8deaf45bb696 name=/runtime.v1.ImageService/ImageStatus
	Jul 17 18:50:11 addons-646610 crio[944]: time="2023-07-17 18:50:11.830701182Z" level=info msg="Creating container: default/hello-world-app-65bdb79f98-lkc96/hello-world-app" id=f71d4b31-e64e-4a19-8155-5c3ca46511fa name=/runtime.v1.RuntimeService/CreateContainer
	Jul 17 18:50:11 addons-646610 crio[944]: time="2023-07-17 18:50:11.830812109Z" level=warning msg="Allowed annotations are specified for workload []"
	Jul 17 18:50:11 addons-646610 crio[944]: time="2023-07-17 18:50:11.906043871Z" level=info msg="Created container b09fa163d6d006992a0459bd76e3dd05d1dfa55ac27e09252d3779a8f80bfff5: default/hello-world-app-65bdb79f98-lkc96/hello-world-app" id=f71d4b31-e64e-4a19-8155-5c3ca46511fa name=/runtime.v1.RuntimeService/CreateContainer
	Jul 17 18:50:11 addons-646610 crio[944]: time="2023-07-17 18:50:11.906626687Z" level=info msg="Starting container: b09fa163d6d006992a0459bd76e3dd05d1dfa55ac27e09252d3779a8f80bfff5" id=5422296e-5798-4b9c-a19c-71c75f9335b1 name=/runtime.v1.RuntimeService/StartContainer
	Jul 17 18:50:11 addons-646610 crio[944]: time="2023-07-17 18:50:11.915642078Z" level=info msg="Started container" PID=9408 containerID=b09fa163d6d006992a0459bd76e3dd05d1dfa55ac27e09252d3779a8f80bfff5 description=default/hello-world-app-65bdb79f98-lkc96/hello-world-app id=5422296e-5798-4b9c-a19c-71c75f9335b1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ed06e2c17ec6fe2f2f898c66aeb478c91f51bfa1eb18f4fb3ffa23ea65c49570
	Jul 17 18:50:12 addons-646610 crio[944]: time="2023-07-17 18:50:12.145012054Z" level=info msg="Removing container: 1c8684be80ccf5e745da3de1bdbdd63f3e7f68d95a5047f9f33cfc31e4d43577" id=a0ac4d30-b783-41dc-82cc-243cecb71c91 name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 17 18:50:12 addons-646610 crio[944]: time="2023-07-17 18:50:12.164521982Z" level=info msg="Removed container 1c8684be80ccf5e745da3de1bdbdd63f3e7f68d95a5047f9f33cfc31e4d43577: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=a0ac4d30-b783-41dc-82cc-243cecb71c91 name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 17 18:50:12 addons-646610 crio[944]: time="2023-07-17 18:50:12.795664717Z" level=info msg="Stopping container: e287ab3617220ae3a0174c47b32fba57dbfc22fb2b6d162f0ef99a0d87646492 (timeout: 1s)" id=3ad3d407-93aa-4de0-b008-9efaf4006ad1 name=/runtime.v1.RuntimeService/StopContainer
	Jul 17 18:50:13 addons-646610 crio[944]: time="2023-07-17 18:50:13.806286296Z" level=warning msg="Stopping container e287ab3617220ae3a0174c47b32fba57dbfc22fb2b6d162f0ef99a0d87646492 with stop signal timed out: timeout reached after 1 seconds waiting for container process to exit" id=3ad3d407-93aa-4de0-b008-9efaf4006ad1 name=/runtime.v1.RuntimeService/StopContainer
	Jul 17 18:50:13 addons-646610 conmon[6031]: conmon e287ab3617220ae3a017 <ninfo>: container 6042 exited with status 137
	Jul 17 18:50:13 addons-646610 crio[944]: time="2023-07-17 18:50:13.953271653Z" level=info msg="Stopped container e287ab3617220ae3a0174c47b32fba57dbfc22fb2b6d162f0ef99a0d87646492: ingress-nginx/ingress-nginx-controller-7799c6795f-zvklb/controller" id=3ad3d407-93aa-4de0-b008-9efaf4006ad1 name=/runtime.v1.RuntimeService/StopContainer
	Jul 17 18:50:13 addons-646610 crio[944]: time="2023-07-17 18:50:13.953789710Z" level=info msg="Stopping pod sandbox: da083507abb181f0da64f0788d500583d03f21e4706bdbadf3ca341edbc9238e" id=7a37e4c8-f52f-4edb-a6bc-aac42a31dc2c name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 17 18:50:13 addons-646610 crio[944]: time="2023-07-17 18:50:13.956739601Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-CP6ULYJYHDTRDAIU - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-BFHNKBT35FTLCB5R - [0:0]\n-X KUBE-HP-BFHNKBT35FTLCB5R\n-X KUBE-HP-CP6ULYJYHDTRDAIU\nCOMMIT\n"
	Jul 17 18:50:13 addons-646610 crio[944]: time="2023-07-17 18:50:13.958069452Z" level=info msg="Closing host port tcp:80"
	Jul 17 18:50:13 addons-646610 crio[944]: time="2023-07-17 18:50:13.958108594Z" level=info msg="Closing host port tcp:443"
	Jul 17 18:50:13 addons-646610 crio[944]: time="2023-07-17 18:50:13.959574662Z" level=info msg="Host port tcp:80 does not have an open socket"
	Jul 17 18:50:13 addons-646610 crio[944]: time="2023-07-17 18:50:13.959592727Z" level=info msg="Host port tcp:443 does not have an open socket"
	Jul 17 18:50:13 addons-646610 crio[944]: time="2023-07-17 18:50:13.959713409Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7799c6795f-zvklb Namespace:ingress-nginx ID:da083507abb181f0da64f0788d500583d03f21e4706bdbadf3ca341edbc9238e UID:f7579526-5ea3-4574-a1e0-20889b276869 NetNS:/var/run/netns/28595603-5484-411b-8f4a-19395e9937fa Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jul 17 18:50:13 addons-646610 crio[944]: time="2023-07-17 18:50:13.959822481Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7799c6795f-zvklb from CNI network \"kindnet\" (type=ptp)"
	Jul 17 18:50:13 addons-646610 crio[944]: time="2023-07-17 18:50:13.997457101Z" level=info msg="Stopped pod sandbox: da083507abb181f0da64f0788d500583d03f21e4706bdbadf3ca341edbc9238e" id=7a37e4c8-f52f-4edb-a6bc-aac42a31dc2c name=/runtime.v1.RuntimeService/StopPodSandbox
	Jul 17 18:50:14 addons-646610 crio[944]: time="2023-07-17 18:50:14.152520774Z" level=info msg="Removing container: e287ab3617220ae3a0174c47b32fba57dbfc22fb2b6d162f0ef99a0d87646492" id=1a8407d7-0e1a-4c86-81ef-f1ab8d04dfb1 name=/runtime.v1.RuntimeService/RemoveContainer
	Jul 17 18:50:14 addons-646610 crio[944]: time="2023-07-17 18:50:14.168162882Z" level=info msg="Removed container e287ab3617220ae3a0174c47b32fba57dbfc22fb2b6d162f0ef99a0d87646492: ingress-nginx/ingress-nginx-controller-7799c6795f-zvklb/controller" id=1a8407d7-0e1a-4c86-81ef-f1ab8d04dfb1 name=/runtime.v1.RuntimeService/RemoveContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b09fa163d6d00       gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea                      8 seconds ago       Running             hello-world-app           0                   ed06e2c17ec6f       hello-world-app-65bdb79f98-lkc96
	9d5c2a40a8e86       docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6                              2 minutes ago       Running             nginx                     0                   a9bbf8fc8414e       nginx
	4140822eaa6a3       ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45                        2 minutes ago       Running             headlamp                  0                   652da0f71ed99       headlamp-66f6498c69-s9c9w
	0cb75ca7d7892       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 2 minutes ago       Running             gcp-auth                  0                   e6fea66696339       gcp-auth-58478865f7-v548d
	06940baae1416       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d   3 minutes ago       Exited              patch                     0                   9ad0fca255355       ingress-nginx-admission-patch-tmghk
	dc831214f38bd       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d   3 minutes ago       Exited              create                    0                   aa25b860aea64       ingress-nginx-admission-create-pspvt
	b40129421a9e6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             3 minutes ago       Running             storage-provisioner       0                   22963c117500c       storage-provisioner
	3404fcbdaeeb9       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             3 minutes ago       Running             coredns                   0                   c4d8d80e9e634       coredns-5d78c9869d-8sfsd
	14006a15afc74       5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c                                                             3 minutes ago       Running             kube-proxy                0                   790abed6c51ae       kube-proxy-rh6wx
	06af6d5762cc3       b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da                                                             3 minutes ago       Running             kindnet-cni               0                   a30194c5e4ef4       kindnet-llwzl
	4566b3dcb2649       7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f                                                             4 minutes ago       Running             kube-controller-manager   0                   a68796e921e84       kube-controller-manager-addons-646610
	d7e9cea5313ae       08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a                                                             4 minutes ago       Running             kube-apiserver            0                   8a8c12935ea55       kube-apiserver-addons-646610
	7ed07734c5edc       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681                                                             4 minutes ago       Running             etcd                      0                   f11ad106490be       etcd-addons-646610
	f1ae25327ac7a       41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a                                                             4 minutes ago       Running             kube-scheduler            0                   6ba1e00dbdbba       kube-scheduler-addons-646610
	
	* 
	* ==> coredns [3404fcbdaeeb9f47a484e695a511efa4ccd131cb95b5bb462801bbcd71ecef7a] <==
	* [INFO] 10.244.0.15:33143 - 29922 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000144754s
	[INFO] 10.244.0.15:45855 - 59788 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.006010997s
	[INFO] 10.244.0.15:45855 - 2696 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.006979448s
	[INFO] 10.244.0.15:51401 - 46579 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005391006s
	[INFO] 10.244.0.15:51401 - 6128 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.006457205s
	[INFO] 10.244.0.15:42694 - 14268 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005246345s
	[INFO] 10.244.0.15:42694 - 63418 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.006517022s
	[INFO] 10.244.0.15:40090 - 50068 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000090448s
	[INFO] 10.244.0.15:40090 - 3236 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000103116s
	[INFO] 10.244.0.18:51563 - 21565 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000184864s
	[INFO] 10.244.0.18:45821 - 62945 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000250364s
	[INFO] 10.244.0.18:39202 - 50989 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00011534s
	[INFO] 10.244.0.18:58327 - 6956 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000219549s
	[INFO] 10.244.0.18:54523 - 44939 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000127137s
	[INFO] 10.244.0.18:52392 - 52686 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000178776s
	[INFO] 10.244.0.18:50404 - 17477 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.007136551s
	[INFO] 10.244.0.18:46634 - 20847 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.007154853s
	[INFO] 10.244.0.18:40828 - 3641 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.004909182s
	[INFO] 10.244.0.18:35154 - 61336 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.005278813s
	[INFO] 10.244.0.18:53445 - 46375 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006455674s
	[INFO] 10.244.0.18:44340 - 38941 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007071004s
	[INFO] 10.244.0.18:53188 - 28071 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 382 0.000621392s
	[INFO] 10.244.0.18:38787 - 53968 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000728271s
	[INFO] 10.244.0.20:45212 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000119585s
	[INFO] 10.244.0.20:47435 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00006782s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-646610
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-646610
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=46c8e7c4243e42e98d29628785c0523fbabbd9b5
	                    minikube.k8s.io/name=addons-646610
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_17T18_46_10_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-646610
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 18:46:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-646610
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 18:50:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 18:48:12 +0000   Mon, 17 Jul 2023 18:46:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 18:48:12 +0000   Mon, 17 Jul 2023 18:46:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 18:48:12 +0000   Mon, 17 Jul 2023 18:46:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 18:48:12 +0000   Mon, 17 Jul 2023 18:46:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-646610
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	System Info:
	  Machine ID:                 64887c77b9a849b6aa0368657aee11f1
	  System UUID:                d6ee2e3a-e692-4aeb-901b-485bc3d722a3
	  Boot ID:                    72066744-0b12-457f-a61f-5086cdf4a210
	  Kernel Version:             5.15.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-65bdb79f98-lkc96         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  gcp-auth                    gcp-auth-58478865f7-v548d                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  headlamp                    headlamp-66f6498c69-s9c9w                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m42s
	  kube-system                 coredns-5d78c9869d-8sfsd                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m58s
	  kube-system                 etcd-addons-646610                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m10s
	  kube-system                 kindnet-llwzl                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m57s
	  kube-system                 kube-apiserver-addons-646610             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-controller-manager-addons-646610    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-proxy-rh6wx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 kube-scheduler-addons-646610             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m53s  kube-proxy       
	  Normal  Starting                 4m11s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m11s  kubelet          Node addons-646610 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m11s  kubelet          Node addons-646610 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m11s  kubelet          Node addons-646610 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m58s  node-controller  Node addons-646610 event: Registered Node addons-646610 in Controller
	  Normal  NodeReady                3m24s  kubelet          Node addons-646610 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [Jul17 18:21] kauditd_printk_skb: 3 callbacks suppressed
	[Jul17 18:33] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev virbr0
	[  +0.000013] ll header: 00000000: 52 54 00 10 a2 1d 52 54 00 10 62 21 08 00
	[  +1.005574] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev virbr0
	[  +0.000008] ll header: 00000000: 52 54 00 10 a2 1d 52 54 00 10 62 21 08 00
	[  +2.047792] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev virbr0
	[  +0.000007] ll header: 00000000: 52 54 00 10 a2 1d 52 54 00 10 62 21 08 00
	[  +4.031679] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev virbr0
	[  +0.000008] ll header: 00000000: 52 54 00 10 a2 1d 52 54 00 10 62 21 08 00
	[  +8.064591] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev virbr0
	[  +0.000008] ll header: 00000000: 52 54 00 10 a2 1d 52 54 00 10 62 21 08 00
	[Jul17 18:47] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 02 bd 37 fb 65 de 16 13 a9 8c 96 ea 08 00
	[  +1.019708] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 02 bd 37 fb 65 de 16 13 a9 8c 96 ea 08 00
	[Jul17 18:48] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 02 bd 37 fb 65 de 16 13 a9 8c 96 ea 08 00
	[  +4.031690] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 02 bd 37 fb 65 de 16 13 a9 8c 96 ea 08 00
	[  +8.191428] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 02 bd 37 fb 65 de 16 13 a9 8c 96 ea 08 00
	[ +16.126790] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 02 bd 37 fb 65 de 16 13 a9 8c 96 ea 08 00
	[Jul17 18:49] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 02 bd 37 fb 65 de 16 13 a9 8c 96 ea 08 00
	
	* 
	* ==> etcd [7ed07734c5edc7d5a48137909f7541b0d7fdb3a849735182df205c6d13d7777a] <==
	* {"level":"info","ts":"2023-07-17T18:46:04.582Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-17T18:46:04.582Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T18:46:04.582Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T18:46:04.583Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T18:46:04.583Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T18:46:04.583Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-07-17T18:46:04.583Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-07-17T18:46:26.470Z","caller":"traceutil/trace.go:171","msg":"trace[1818594762] transaction","detail":"{read_only:false; response_revision:393; number_of_response:1; }","duration":"102.129357ms","start":"2023-07-17T18:46:26.368Z","end":"2023-07-17T18:46:26.470Z","steps":["trace[1818594762] 'compare'  (duration: 96.941455ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T18:46:26.473Z","caller":"traceutil/trace.go:171","msg":"trace[154573575] transaction","detail":"{read_only:false; response_revision:394; number_of_response:1; }","duration":"104.551045ms","start":"2023-07-17T18:46:26.368Z","end":"2023-07-17T18:46:26.473Z","steps":["trace[154573575] 'process raft request'  (duration: 101.564572ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T18:46:26.475Z","caller":"traceutil/trace.go:171","msg":"trace[552917591] transaction","detail":"{read_only:false; response_revision:396; number_of_response:1; }","duration":"104.511877ms","start":"2023-07-17T18:46:26.370Z","end":"2023-07-17T18:46:26.475Z","steps":["trace[552917591] 'process raft request'  (duration: 103.990058ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T18:46:26.475Z","caller":"traceutil/trace.go:171","msg":"trace[1387481235] transaction","detail":"{read_only:false; response_revision:395; number_of_response:1; }","duration":"106.758399ms","start":"2023-07-17T18:46:26.368Z","end":"2023-07-17T18:46:26.475Z","steps":["trace[1387481235] 'process raft request'  (duration: 101.411913ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T18:46:26.477Z","caller":"traceutil/trace.go:171","msg":"trace[925123780] transaction","detail":"{read_only:false; response_revision:397; number_of_response:1; }","duration":"104.612407ms","start":"2023-07-17T18:46:26.373Z","end":"2023-07-17T18:46:26.477Z","steps":["trace[925123780] 'process raft request'  (duration: 101.62732ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T18:46:26.477Z","caller":"traceutil/trace.go:171","msg":"trace[916512637] linearizableReadLoop","detail":"{readStateIndex:408; appliedIndex:406; }","duration":"108.966938ms","start":"2023-07-17T18:46:26.368Z","end":"2023-07-17T18:46:26.477Z","steps":["trace[916512637] 'read index received'  (duration: 4.1784ms)","trace[916512637] 'applied index is now lower than readState.Index'  (duration: 104.786945ms)"],"step_count":2}
	{"level":"warn","ts":"2023-07-17T18:46:26.477Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.076177ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/servicenodeports\" ","response":"range_response_count:1 size:316"}
	{"level":"info","ts":"2023-07-17T18:46:26.568Z","caller":"traceutil/trace.go:171","msg":"trace[1713169888] range","detail":"{range_begin:/registry/ranges/servicenodeports; range_end:; response_count:1; response_revision:399; }","duration":"199.306569ms","start":"2023-07-17T18:46:26.368Z","end":"2023-07-17T18:46:26.568Z","steps":["trace[1713169888] 'agreement among raft nodes before linearized reading'  (duration: 109.010025ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T18:46:32.279Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.108982ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings/kube-system/csi-resizer-role-cfg\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-07-17T18:46:32.279Z","caller":"traceutil/trace.go:171","msg":"trace[385665475] range","detail":"{range_begin:/registry/rolebindings/kube-system/csi-resizer-role-cfg; range_end:; response_count:0; response_revision:644; }","duration":"103.269828ms","start":"2023-07-17T18:46:32.176Z","end":"2023-07-17T18:46:32.279Z","steps":["trace[385665475] 'range keys from in-memory index tree'  (duration: 102.974529ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-17T18:47:45.014Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.740917ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128022495128701584 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/metrics-server-844d8db974-k86gh\" mod_revision:1144 > success:<request_put:<key:\"/registry/pods/kube-system/metrics-server-844d8db974-k86gh\" value_size:4471 >> failure:<request_range:<key:\"/registry/pods/kube-system/metrics-server-844d8db974-k86gh\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-07-17T18:47:45.014Z","caller":"traceutil/trace.go:171","msg":"trace[2040198650] linearizableReadLoop","detail":"{readStateIndex:1180; appliedIndex:1179; }","duration":"129.33326ms","start":"2023-07-17T18:47:44.885Z","end":"2023-07-17T18:47:45.014Z","steps":["trace[2040198650] 'read index received'  (duration: 10.933547ms)","trace[2040198650] 'applied index is now lower than readState.Index'  (duration: 118.398538ms)"],"step_count":2}
	{"level":"info","ts":"2023-07-17T18:47:45.014Z","caller":"traceutil/trace.go:171","msg":"trace[1461469295] transaction","detail":"{read_only:false; response_revision:1145; number_of_response:1; }","duration":"193.688262ms","start":"2023-07-17T18:47:44.820Z","end":"2023-07-17T18:47:45.014Z","steps":["trace[1461469295] 'process raft request'  (duration: 75.334264ms)","trace[1461469295] 'compare'  (duration: 117.658192ms)"],"step_count":2}
	{"level":"warn","ts":"2023-07-17T18:47:45.014Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.439921ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gadget/\" range_end:\"/registry/pods/gadget0\" ","response":"range_response_count:1 size:7517"}
	{"level":"info","ts":"2023-07-17T18:47:45.014Z","caller":"traceutil/trace.go:171","msg":"trace[294239693] range","detail":"{range_begin:/registry/pods/gadget/; range_end:/registry/pods/gadget0; response_count:1; response_revision:1145; }","duration":"129.472957ms","start":"2023-07-17T18:47:44.885Z","end":"2023-07-17T18:47:45.014Z","steps":["trace[294239693] 'agreement among raft nodes before linearized reading'  (duration: 129.374975ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T18:47:50.900Z","caller":"traceutil/trace.go:171","msg":"trace[1428363786] transaction","detail":"{read_only:false; response_revision:1210; number_of_response:1; }","duration":"156.955324ms","start":"2023-07-17T18:47:50.743Z","end":"2023-07-17T18:47:50.900Z","steps":["trace[1428363786] 'process raft request'  (duration: 141.391517ms)","trace[1428363786] 'compare'  (duration: 15.434566ms)"],"step_count":2}
	{"level":"warn","ts":"2023-07-17T18:48:13.036Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.584393ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas/\" range_end:\"/registry/resourcequotas0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-07-17T18:48:13.036Z","caller":"traceutil/trace.go:171","msg":"trace[1686108342] range","detail":"{range_begin:/registry/resourcequotas/; range_end:/registry/resourcequotas0; response_count:0; response_revision:1352; }","duration":"112.666061ms","start":"2023-07-17T18:48:12.924Z","end":"2023-07-17T18:48:13.036Z","steps":["trace[1686108342] 'count revisions from in-memory index tree'  (duration: 112.484564ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [0cb75ca7d7892ea53150582ba24dcce1bc878598fecb13255a5db91e82b6636e] <==
	* 2023/07/17 18:47:29 GCP Auth Webhook started!
	2023/07/17 18:47:38 Ready to marshal response ...
	2023/07/17 18:47:38 Ready to write response ...
	2023/07/17 18:47:38 Ready to marshal response ...
	2023/07/17 18:47:38 Ready to write response ...
	2023/07/17 18:47:38 Ready to marshal response ...
	2023/07/17 18:47:38 Ready to write response ...
	2023/07/17 18:47:47 Ready to marshal response ...
	2023/07/17 18:47:47 Ready to write response ...
	2023/07/17 18:47:48 Ready to marshal response ...
	2023/07/17 18:47:48 Ready to write response ...
	2023/07/17 18:47:51 Ready to marshal response ...
	2023/07/17 18:47:51 Ready to write response ...
	2023/07/17 18:47:59 Ready to marshal response ...
	2023/07/17 18:47:59 Ready to write response ...
	2023/07/17 18:48:25 Ready to marshal response ...
	2023/07/17 18:48:25 Ready to write response ...
	2023/07/17 18:50:10 Ready to marshal response ...
	2023/07/17 18:50:10 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  18:50:21 up  3:32,  0 users,  load average: 0.41, 1.79, 2.55
	Linux addons-646610 5.15.0-1037-gcp #45~20.04.1-Ubuntu SMP Thu Jun 22 08:31:09 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [06af6d5762cc328a6b93e1ecb0c583f0ff41c6092b5fc026cf538adc864d05b7] <==
	* I0717 18:48:16.105508       1 main.go:227] handling current node
	I0717 18:48:26.115357       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 18:48:26.115379       1 main.go:227] handling current node
	I0717 18:48:36.120838       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 18:48:36.120866       1 main.go:227] handling current node
	I0717 18:48:46.124445       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 18:48:46.124468       1 main.go:227] handling current node
	I0717 18:48:56.128454       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 18:48:56.128480       1 main.go:227] handling current node
	I0717 18:49:06.141173       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 18:49:06.141204       1 main.go:227] handling current node
	I0717 18:49:16.144850       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 18:49:16.144872       1 main.go:227] handling current node
	I0717 18:49:26.157434       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 18:49:26.157457       1 main.go:227] handling current node
	I0717 18:49:36.161906       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 18:49:36.161929       1 main.go:227] handling current node
	I0717 18:49:46.165521       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 18:49:46.165545       1 main.go:227] handling current node
	I0717 18:49:56.177582       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 18:49:56.177607       1 main.go:227] handling current node
	I0717 18:50:06.188697       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 18:50:06.188718       1 main.go:227] handling current node
	I0717 18:50:16.193053       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 18:50:16.193076       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [d7e9cea5313ae86b84afaeecc0f2143897bb197071e316eeadbaca48c64c7ae0] <==
	* I0717 18:48:41.588550       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 18:48:41.588704       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 18:48:41.595535       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 18:48:41.595607       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 18:48:41.604757       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 18:48:41.605754       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 18:48:41.608813       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 18:48:41.608865       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 18:48:41.681448       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 18:48:41.681535       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 18:48:41.683263       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 18:48:41.683397       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 18:48:41.775042       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 18:48:41.775109       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0717 18:48:41.784193       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0717 18:48:41.784245       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0717 18:48:42.610745       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0717 18:48:42.784355       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0717 18:48:42.789409       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0717 18:49:17.795921       1 handler_proxy.go:144] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0717 18:49:17.795953       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 18:49:17.796022       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 18:49:17.796032       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 18:50:10.577602       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs=map[IPv4:10.96.48.165]
	
	* 
	* ==> kube-controller-manager [4566b3dcb2649fcfcb4c449ac778bc69672a2e7fd9cbc9dc0b61388adf36124b] <==
	* E0717 18:48:54.089536       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 18:49:01.423806       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 18:49:01.423845       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 18:49:03.295233       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 18:49:03.295271       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 18:49:03.636680       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 18:49:03.636714       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 18:49:14.687343       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 18:49:14.687376       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 18:49:19.851267       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 18:49:19.851307       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 18:49:23.951938       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 18:49:23.951989       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 18:49:46.050947       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 18:49:46.050982       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 18:49:47.558106       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 18:49:47.558137       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 18:50:00.543867       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 18:50:00.543907       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0717 18:50:07.845189       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0717 18:50:07.845230       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0717 18:50:10.425511       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-65bdb79f98 to 1"
	I0717 18:50:10.439229       1 event.go:307] "Event occurred" object="default/hello-world-app-65bdb79f98" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-65bdb79f98-lkc96"
	I0717 18:50:12.783767       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-create
	I0717 18:50:12.791786       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-patch
	
	* 
	* ==> kube-proxy [14006a15afc74266e1cfbd68bfe4b881c2452f6e696c44dfc8dbc5a8ba658706] <==
	* I0717 18:46:26.679965       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0717 18:46:26.680117       1 server_others.go:110] "Detected node IP" address="192.168.49.2"
	I0717 18:46:26.680153       1 server_others.go:554] "Using iptables proxy"
	I0717 18:46:27.280956       1 server_others.go:192] "Using iptables Proxier"
	I0717 18:46:27.281072       1 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0717 18:46:27.281120       1 server_others.go:200] "Creating dualStackProxier for iptables"
	I0717 18:46:27.281168       1 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0717 18:46:27.281229       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 18:46:27.281935       1 server.go:658] "Version info" version="v1.27.3"
	I0717 18:46:27.282404       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 18:46:27.283677       1 config.go:188] "Starting service config controller"
	I0717 18:46:27.362417       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0717 18:46:27.362114       1 config.go:315] "Starting node config controller"
	I0717 18:46:27.362593       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0717 18:46:27.362160       1 config.go:97] "Starting endpoint slice config controller"
	I0717 18:46:27.362643       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0717 18:46:27.467923       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0717 18:46:27.468013       1 shared_informer.go:318] Caches are synced for service config
	I0717 18:46:27.468165       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [f1ae25327ac7a76c7110c883c7e9cd452278833e21b5d197b94dcb819ba2be5d] <==
	* W0717 18:46:07.067384       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0717 18:46:07.067391       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 18:46:07.067404       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 18:46:07.067413       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 18:46:07.067427       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 18:46:07.067346       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 18:46:07.067452       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 18:46:07.067457       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 18:46:07.067362       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 18:46:07.067469       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 18:46:07.067268       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 18:46:07.067506       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 18:46:07.067509       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 18:46:07.067527       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 18:46:07.067567       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 18:46:07.067587       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 18:46:07.067589       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 18:46:07.067602       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 18:46:07.967777       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 18:46:07.967829       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 18:46:08.017611       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 18:46:08.017640       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 18:46:08.050950       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 18:46:08.051063       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0717 18:46:08.663898       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Jul 17 18:50:11 addons-646610 kubelet[1557]: I0717 18:50:11.618683    1557 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9s2kg\" (UniqueName: \"kubernetes.io/projected/15767547-b88a-45f1-a559-d3eab693cef4-kube-api-access-9s2kg\") pod \"15767547-b88a-45f1-a559-d3eab693cef4\" (UID: \"15767547-b88a-45f1-a559-d3eab693cef4\") "
	Jul 17 18:50:11 addons-646610 kubelet[1557]: I0717 18:50:11.620981    1557 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15767547-b88a-45f1-a559-d3eab693cef4-kube-api-access-9s2kg" (OuterVolumeSpecName: "kube-api-access-9s2kg") pod "15767547-b88a-45f1-a559-d3eab693cef4" (UID: "15767547-b88a-45f1-a559-d3eab693cef4"). InnerVolumeSpecName "kube-api-access-9s2kg". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 17 18:50:11 addons-646610 kubelet[1557]: I0717 18:50:11.719272    1557 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-9s2kg\" (UniqueName: \"kubernetes.io/projected/15767547-b88a-45f1-a559-d3eab693cef4-kube-api-access-9s2kg\") on node \"addons-646610\" DevicePath \"\""
	Jul 17 18:50:12 addons-646610 kubelet[1557]: I0717 18:50:12.143893    1557 scope.go:115] "RemoveContainer" containerID="1c8684be80ccf5e745da3de1bdbdd63f3e7f68d95a5047f9f33cfc31e4d43577"
	Jul 17 18:50:12 addons-646610 kubelet[1557]: I0717 18:50:12.164833    1557 scope.go:115] "RemoveContainer" containerID="1c8684be80ccf5e745da3de1bdbdd63f3e7f68d95a5047f9f33cfc31e4d43577"
	Jul 17 18:50:12 addons-646610 kubelet[1557]: E0717 18:50:12.165376    1557 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c8684be80ccf5e745da3de1bdbdd63f3e7f68d95a5047f9f33cfc31e4d43577\": container with ID starting with 1c8684be80ccf5e745da3de1bdbdd63f3e7f68d95a5047f9f33cfc31e4d43577 not found: ID does not exist" containerID="1c8684be80ccf5e745da3de1bdbdd63f3e7f68d95a5047f9f33cfc31e4d43577"
	Jul 17 18:50:12 addons-646610 kubelet[1557]: I0717 18:50:12.165434    1557 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:1c8684be80ccf5e745da3de1bdbdd63f3e7f68d95a5047f9f33cfc31e4d43577} err="failed to get container status \"1c8684be80ccf5e745da3de1bdbdd63f3e7f68d95a5047f9f33cfc31e4d43577\": rpc error: code = NotFound desc = could not find container \"1c8684be80ccf5e745da3de1bdbdd63f3e7f68d95a5047f9f33cfc31e4d43577\": container with ID starting with 1c8684be80ccf5e745da3de1bdbdd63f3e7f68d95a5047f9f33cfc31e4d43577 not found: ID does not exist"
	Jul 17 18:50:12 addons-646610 kubelet[1557]: I0717 18:50:12.170305    1557 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/hello-world-app-65bdb79f98-lkc96" podStartSLOduration=1.207914198 podCreationTimestamp="2023-07-17 18:50:10 +0000 UTC" firstStartedPulling="2023-07-17 18:50:10.866302352 +0000 UTC m=+241.191560644" lastFinishedPulling="2023-07-17 18:50:11.828637152 +0000 UTC m=+242.153895442" observedRunningTime="2023-07-17 18:50:12.160246692 +0000 UTC m=+242.485504997" watchObservedRunningTime="2023-07-17 18:50:12.170248996 +0000 UTC m=+242.495507301"
	Jul 17 18:50:12 addons-646610 kubelet[1557]: E0717 18:50:12.797470    1557 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7799c6795f-zvklb.1772bc448b1fc300", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7799c6795f-zvklb", UID:"f7579526-5ea3-4574-a1e0-20889b276869", APIVersion:"v1", ResourceVersion:"738", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"addons-646610"}, FirstTimestamp:time.Date(2023, time.July, 17, 18, 50, 12, 795269888, time.Local), LastTimestamp:time.Date(2023, time.July, 17, 18, 50, 12, 795269888, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7799c6795f-zvklb.1772bc448b1fc300" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jul 17 18:50:13 addons-646610 kubelet[1557]: I0717 18:50:13.796526    1557 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=15767547-b88a-45f1-a559-d3eab693cef4 path="/var/lib/kubelet/pods/15767547-b88a-45f1-a559-d3eab693cef4/volumes"
	Jul 17 18:50:13 addons-646610 kubelet[1557]: I0717 18:50:13.796954    1557 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=1b2ab15d-e272-4dd4-9b22-efeec6679ff1 path="/var/lib/kubelet/pods/1b2ab15d-e272-4dd4-9b22-efeec6679ff1/volumes"
	Jul 17 18:50:13 addons-646610 kubelet[1557]: I0717 18:50:13.797310    1557 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=cc6c3ac3-06e7-4201-9ef5-ecd0f6473863 path="/var/lib/kubelet/pods/cc6c3ac3-06e7-4201-9ef5-ecd0f6473863/volumes"
	Jul 17 18:50:14 addons-646610 kubelet[1557]: I0717 18:50:14.151535    1557 scope.go:115] "RemoveContainer" containerID="e287ab3617220ae3a0174c47b32fba57dbfc22fb2b6d162f0ef99a0d87646492"
	Jul 17 18:50:14 addons-646610 kubelet[1557]: I0717 18:50:14.168420    1557 scope.go:115] "RemoveContainer" containerID="e287ab3617220ae3a0174c47b32fba57dbfc22fb2b6d162f0ef99a0d87646492"
	Jul 17 18:50:14 addons-646610 kubelet[1557]: E0717 18:50:14.168811    1557 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e287ab3617220ae3a0174c47b32fba57dbfc22fb2b6d162f0ef99a0d87646492\": container with ID starting with e287ab3617220ae3a0174c47b32fba57dbfc22fb2b6d162f0ef99a0d87646492 not found: ID does not exist" containerID="e287ab3617220ae3a0174c47b32fba57dbfc22fb2b6d162f0ef99a0d87646492"
	Jul 17 18:50:14 addons-646610 kubelet[1557]: I0717 18:50:14.168866    1557 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:cri-o ID:e287ab3617220ae3a0174c47b32fba57dbfc22fb2b6d162f0ef99a0d87646492} err="failed to get container status \"e287ab3617220ae3a0174c47b32fba57dbfc22fb2b6d162f0ef99a0d87646492\": rpc error: code = NotFound desc = could not find container \"e287ab3617220ae3a0174c47b32fba57dbfc22fb2b6d162f0ef99a0d87646492\": container with ID starting with e287ab3617220ae3a0174c47b32fba57dbfc22fb2b6d162f0ef99a0d87646492 not found: ID does not exist"
	Jul 17 18:50:14 addons-646610 kubelet[1557]: I0717 18:50:14.168913    1557 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wv8qz\" (UniqueName: \"kubernetes.io/projected/f7579526-5ea3-4574-a1e0-20889b276869-kube-api-access-wv8qz\") pod \"f7579526-5ea3-4574-a1e0-20889b276869\" (UID: \"f7579526-5ea3-4574-a1e0-20889b276869\") "
	Jul 17 18:50:14 addons-646610 kubelet[1557]: I0717 18:50:14.168961    1557 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f7579526-5ea3-4574-a1e0-20889b276869-webhook-cert\") pod \"f7579526-5ea3-4574-a1e0-20889b276869\" (UID: \"f7579526-5ea3-4574-a1e0-20889b276869\") "
	Jul 17 18:50:14 addons-646610 kubelet[1557]: I0717 18:50:14.170683    1557 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7579526-5ea3-4574-a1e0-20889b276869-kube-api-access-wv8qz" (OuterVolumeSpecName: "kube-api-access-wv8qz") pod "f7579526-5ea3-4574-a1e0-20889b276869" (UID: "f7579526-5ea3-4574-a1e0-20889b276869"). InnerVolumeSpecName "kube-api-access-wv8qz". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 17 18:50:14 addons-646610 kubelet[1557]: I0717 18:50:14.170902    1557 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7579526-5ea3-4574-a1e0-20889b276869-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "f7579526-5ea3-4574-a1e0-20889b276869" (UID: "f7579526-5ea3-4574-a1e0-20889b276869"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 17 18:50:14 addons-646610 kubelet[1557]: I0717 18:50:14.269191    1557 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-wv8qz\" (UniqueName: \"kubernetes.io/projected/f7579526-5ea3-4574-a1e0-20889b276869-kube-api-access-wv8qz\") on node \"addons-646610\" DevicePath \"\""
	Jul 17 18:50:14 addons-646610 kubelet[1557]: I0717 18:50:14.269231    1557 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f7579526-5ea3-4574-a1e0-20889b276869-webhook-cert\") on node \"addons-646610\" DevicePath \"\""
	Jul 17 18:50:14 addons-646610 kubelet[1557]: W0717 18:50:14.733486    1557 container.go:586] Failed to update stats for container "/crio-1c5e91f2e1bea5d4ab7af6835c063462a4ee6d6b4644515d83c064be2d7bfa4b": unable to determine device info for dir: /var/lib/containers/storage/overlay/cae7504df97038d35a20e9410bb9fcad0ad56a369931d4da20a22e48ca065069/diff: stat failed on /var/lib/containers/storage/overlay/cae7504df97038d35a20e9410bb9fcad0ad56a369931d4da20a22e48ca065069/diff with error: no such file or directory, continuing to push stats
	Jul 17 18:50:15 addons-646610 kubelet[1557]: I0717 18:50:15.795829    1557 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=f7579526-5ea3-4574-a1e0-20889b276869 path="/var/lib/kubelet/pods/f7579526-5ea3-4574-a1e0-20889b276869/volumes"
	Jul 17 18:50:19 addons-646610 kubelet[1557]: W0717 18:50:19.090607    1557 container.go:586] Failed to update stats for container "/docker/06535e39775498d0893bc2f8f6b69829874bf1524803ed87dba1533c3b1653b7/crio-9da650c6e3b049607254474fcf7442f684897c0836041c19eb92bf1bb151826a": unable to determine device info for dir: /var/lib/containers/storage/overlay/0624caf4280d4cab16228298fb3771f68d03116b653ee6e93ab0c6c8659a28cb/diff: stat failed on /var/lib/containers/storage/overlay/0624caf4280d4cab16228298fb3771f68d03116b653ee6e93ab0c6c8659a28cb/diff with error: no such file or directory, continuing to push stats
	
	* 
	* ==> storage-provisioner [b40129421a9e6c7ac1e0bc04a3ab4a6122fa3dbd7321c244d76dec3c3df49be7] <==
	* I0717 18:46:57.100940       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 18:46:57.109772       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 18:46:57.109821       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 18:46:57.120635       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 18:46:57.120835       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-646610_ade45997-40be-4f23-b873-0cd62a74493e!
	I0717 18:46:57.120795       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"df770467-143a-4c1d-bdb0-64ef6a35d92a", APIVersion:"v1", ResourceVersion:"819", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-646610_ade45997-40be-4f23-b873-0cd62a74493e became leader
	I0717 18:46:57.221315       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-646610_ade45997-40be-4f23-b873-0cd62a74493e!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-646610 -n addons-646610
helpers_test.go:261: (dbg) Run:  kubectl --context addons-646610 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (152.25s)

                                                
                                    
TestErrorSpam/setup (21.44s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-113120 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-113120 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-113120 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-113120 --driver=docker  --container-runtime=crio: (21.438315474s)
error_spam_test.go:96: unexpected stderr: "! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1"
error_spam_test.go:110: minikube stdout:
* [nospam-113120] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=16890
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/16890-138069/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-138069/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting control plane node nospam-113120 in cluster nospam-113120
* Pulling base image ...
* Creating docker container (CPUs=2, Memory=2250MB) ...
* Preparing Kubernetes v1.27.3 on CRI-O 1.24.6 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring CNI (Container Networking Interface) ...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Verifying Kubernetes components...
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-113120" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
--- FAIL: TestErrorSpam/setup (21.44s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (11.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-387153
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 image load --daemon gcr.io/google-containers/addon-resizer:functional-387153 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-387153 image load --daemon gcr.io/google-containers/addon-resizer:functional-387153 --alsologtostderr: (8.448019475s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 image ls
functional_test.go:447: (dbg) Done: out/minikube-linux-amd64 -p functional-387153 image ls: (2.227200511s)
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-387153" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (11.70s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (182.39s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-795879 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-795879 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (16.320302078s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-795879 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-795879 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [0187a1e9-ea1e-4acc-a5af-a0e7d99cd113] Pending
helpers_test.go:344: "nginx" [0187a1e9-ea1e-4acc-a5af-a0e7d99cd113] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [0187a1e9-ea1e-4acc-a5af-a0e7d99cd113] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.01008069s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-795879 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0717 18:57:37.170091  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/client.crt: no such file or directory
E0717 18:58:04.858057  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/client.crt: no such file or directory
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-795879 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.515065609s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-795879 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-795879 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.007338968s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

                                                
                                                

                                                
                                                

                                                
                                                
stderr: 
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-795879 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-795879 addons disable ingress-dns --alsologtostderr -v=1: (1.513855702s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-795879 addons disable ingress --alsologtostderr -v=1
E0717 18:58:38.047237  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/functional-387153/client.crt: no such file or directory
E0717 18:58:38.052515  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/functional-387153/client.crt: no such file or directory
E0717 18:58:38.062881  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/functional-387153/client.crt: no such file or directory
E0717 18:58:38.083202  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/functional-387153/client.crt: no such file or directory
E0717 18:58:38.123479  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/functional-387153/client.crt: no such file or directory
E0717 18:58:38.203837  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/functional-387153/client.crt: no such file or directory
E0717 18:58:38.364296  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/functional-387153/client.crt: no such file or directory
E0717 18:58:38.684901  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/functional-387153/client.crt: no such file or directory
E0717 18:58:39.325902  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/functional-387153/client.crt: no such file or directory
E0717 18:58:40.606925  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/functional-387153/client.crt: no such file or directory
E0717 18:58:43.168034  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/functional-387153/client.crt: no such file or directory
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-795879 addons disable ingress --alsologtostderr -v=1: (7.388536502s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-795879
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-795879:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "85331f8d3f3b82d198bd6551f0a039b3e945643ef9fa0dcaf8d817aaee089895",
	        "Created": "2023-07-17T18:54:37.303845938Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 183415,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-17T18:54:37.586583103Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6cc01e6091959400f260dc442708e7c71630b58dab1f7c344cb00926bd84950",
	        "ResolvConfPath": "/var/lib/docker/containers/85331f8d3f3b82d198bd6551f0a039b3e945643ef9fa0dcaf8d817aaee089895/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/85331f8d3f3b82d198bd6551f0a039b3e945643ef9fa0dcaf8d817aaee089895/hostname",
	        "HostsPath": "/var/lib/docker/containers/85331f8d3f3b82d198bd6551f0a039b3e945643ef9fa0dcaf8d817aaee089895/hosts",
	        "LogPath": "/var/lib/docker/containers/85331f8d3f3b82d198bd6551f0a039b3e945643ef9fa0dcaf8d817aaee089895/85331f8d3f3b82d198bd6551f0a039b3e945643ef9fa0dcaf8d817aaee089895-json.log",
	        "Name": "/ingress-addon-legacy-795879",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-795879:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ingress-addon-legacy-795879",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/051f93a70dfce09f78171e9cbfaf8455f1dc1f1c71611bcbfefaf8e7a65f0c86-init/diff:/var/lib/docker/overlay2/d8b40fcaabfbbb6eb20cfe7c35f752b4babaa96b29803507d5f63d9939e9e0f0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/051f93a70dfce09f78171e9cbfaf8455f1dc1f1c71611bcbfefaf8e7a65f0c86/merged",
	                "UpperDir": "/var/lib/docker/overlay2/051f93a70dfce09f78171e9cbfaf8455f1dc1f1c71611bcbfefaf8e7a65f0c86/diff",
	                "WorkDir": "/var/lib/docker/overlay2/051f93a70dfce09f78171e9cbfaf8455f1dc1f1c71611bcbfefaf8e7a65f0c86/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-795879",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-795879/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-795879",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-795879",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-795879",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "577f695f4a809494c9d517f88ecfc5521ee368479cc4a63ce570df9f52746fff",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/577f695f4a80",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-795879": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "85331f8d3f3b",
	                        "ingress-addon-legacy-795879"
	                    ],
	                    "NetworkID": "5f168992f6db7bed16d1ce52323a9a82bcb2c1dfac072296dff2de7437f9e8a4",
	                    "EndpointID": "1c740a1d6009bfac04df8615f25de6085b14d6bf23d2677e6a62dfca04fdd5f6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-795879 -n ingress-addon-legacy-795879
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-795879 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-795879 logs -n 25: (1.061077115s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| cache   | functional-387153 cache reload                                           | functional-387153           | jenkins | v1.30.1 | 17 Jul 23 18:52 UTC | 17 Jul 23 18:52 UTC |
	| ssh     | functional-387153 ssh                                                    | functional-387153           | jenkins | v1.30.1 | 17 Jul 23 18:52 UTC | 17 Jul 23 18:52 UTC |
	|         | sudo crictl inspecti                                                     |                             |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                             |         |         |                     |                     |
	| cache   | delete                                                                   | minikube                    | jenkins | v1.30.1 | 17 Jul 23 18:52 UTC | 17 Jul 23 18:52 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                             |         |         |                     |                     |
	| cache   | delete                                                                   | minikube                    | jenkins | v1.30.1 | 17 Jul 23 18:52 UTC | 17 Jul 23 18:52 UTC |
	|         | registry.k8s.io/pause:latest                                             |                             |         |         |                     |                     |
	| kubectl | functional-387153 kubectl --                                             | functional-387153           | jenkins | v1.30.1 | 17 Jul 23 18:52 UTC | 17 Jul 23 18:52 UTC |
	|         | --context functional-387153                                              |                             |         |         |                     |                     |
	|         | get pods                                                                 |                             |         |         |                     |                     |
	| start   | -p functional-387153                                                     | functional-387153           | jenkins | v1.30.1 | 17 Jul 23 18:52 UTC | 17 Jul 23 18:53 UTC |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                             |         |         |                     |                     |
	|         | --wait=all                                                               |                             |         |         |                     |                     |
	| service | invalid-svc -p                                                           | functional-387153           | jenkins | v1.30.1 | 17 Jul 23 18:53 UTC |                     |
	|         | functional-387153                                                        |                             |         |         |                     |                     |
	| ssh     | functional-387153 ssh echo                                               | functional-387153           | jenkins | v1.30.1 | 17 Jul 23 18:53 UTC | 17 Jul 23 18:53 UTC |
	|         | hello                                                                    |                             |         |         |                     |                     |
	| config  | functional-387153 config unset                                           | functional-387153           | jenkins | v1.30.1 | 17 Jul 23 18:53 UTC | 17 Jul 23 18:53 UTC |
	|         | cpus                                                                     |                             |         |         |                     |                     |
	| config  | functional-387153 config get                                             | functional-387153           | jenkins | v1.30.1 | 17 Jul 23 18:53 UTC |                     |
	|         | cpus                                                                     |                             |         |         |                     |                     |
	| config  | functional-387153 config set                                             | functional-387153           | jenkins | v1.30.1 | 17 Jul 23 18:53 UTC | 17 Jul 23 18:53 UTC |
	|         | cpus 2                                                                   |                             |         |         |                     |                     |
	| config  | functional-387153 config get                                             | functional-387153           | jenkins | v1.30.1 | 17 Jul 23 18:53 UTC | 17 Jul 23 18:53 UTC |
	|         | cpus                                                                     |                             |         |         |                     |                     |
	| config  | functional-387153 config unset                                           | functional-387153           | jenkins | v1.30.1 | 17 Jul 23 18:53 UTC | 17 Jul 23 18:53 UTC |
	|         | cpus                                                                     |                             |         |         |                     |                     |
	| image   | functional-387153                                                        | functional-387153           | jenkins | v1.30.1 | 17 Jul 23 18:54 UTC | 17 Jul 23 18:54 UTC |
	|         | image ls --format json                                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                             |         |         |                     |                     |
	| image   | functional-387153                                                        | functional-387153           | jenkins | v1.30.1 | 17 Jul 23 18:54 UTC | 17 Jul 23 18:54 UTC |
	|         | image ls --format table                                                  |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                             |         |         |                     |                     |
	| image   | functional-387153 image build -t                                         | functional-387153           | jenkins | v1.30.1 | 17 Jul 23 18:54 UTC | 17 Jul 23 18:54 UTC |
	|         | localhost/my-image:functional-387153                                     |                             |         |         |                     |                     |
	|         | testdata/build --alsologtostderr                                         |                             |         |         |                     |                     |
	| image   | functional-387153 image ls                                               | functional-387153           | jenkins | v1.30.1 | 17 Jul 23 18:54 UTC | 17 Jul 23 18:54 UTC |
	| delete  | -p functional-387153                                                     | functional-387153           | jenkins | v1.30.1 | 17 Jul 23 18:54 UTC | 17 Jul 23 18:54 UTC |
	| start   | -p ingress-addon-legacy-795879                                           | ingress-addon-legacy-795879 | jenkins | v1.30.1 | 17 Jul 23 18:54 UTC | 17 Jul 23 18:55 UTC |
	|         | --kubernetes-version=v1.18.20                                            |                             |         |         |                     |                     |
	|         | --memory=4096 --wait=true                                                |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                             |         |         |                     |                     |
	|         | -v=5 --driver=docker                                                     |                             |         |         |                     |                     |
	|         | --container-runtime=crio                                                 |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-795879                                              | ingress-addon-legacy-795879 | jenkins | v1.30.1 | 17 Jul 23 18:55 UTC | 17 Jul 23 18:55 UTC |
	|         | addons enable ingress                                                    |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                   |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-795879                                              | ingress-addon-legacy-795879 | jenkins | v1.30.1 | 17 Jul 23 18:55 UTC | 17 Jul 23 18:55 UTC |
	|         | addons enable ingress-dns                                                |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                   |                             |         |         |                     |                     |
	| ssh     | ingress-addon-legacy-795879                                              | ingress-addon-legacy-795879 | jenkins | v1.30.1 | 17 Jul 23 18:56 UTC |                     |
	|         | ssh curl -s http://127.0.0.1/                                            |                             |         |         |                     |                     |
	|         | -H 'Host: nginx.example.com'                                             |                             |         |         |                     |                     |
	| ip      | ingress-addon-legacy-795879 ip                                           | ingress-addon-legacy-795879 | jenkins | v1.30.1 | 17 Jul 23 18:58 UTC | 17 Jul 23 18:58 UTC |
	| addons  | ingress-addon-legacy-795879                                              | ingress-addon-legacy-795879 | jenkins | v1.30.1 | 17 Jul 23 18:58 UTC | 17 Jul 23 18:58 UTC |
	|         | addons disable ingress-dns                                               |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                   |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-795879                                              | ingress-addon-legacy-795879 | jenkins | v1.30.1 | 17 Jul 23 18:58 UTC | 17 Jul 23 18:58 UTC |
	|         | addons disable ingress                                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                   |                             |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 18:54:25
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 18:54:25.746469  182797 out.go:296] Setting OutFile to fd 1 ...
	I0717 18:54:25.746594  182797 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 18:54:25.746604  182797 out.go:309] Setting ErrFile to fd 2...
	I0717 18:54:25.746608  182797 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 18:54:25.746813  182797 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-138069/.minikube/bin
	I0717 18:54:25.747425  182797 out.go:303] Setting JSON to false
	I0717 18:54:25.748717  182797 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":13017,"bootTime":1689607049,"procs":474,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 18:54:25.748782  182797 start.go:138] virtualization: kvm guest
	I0717 18:54:25.752623  182797 out.go:177] * [ingress-addon-legacy-795879] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 18:54:25.754351  182797 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 18:54:25.754351  182797 notify.go:220] Checking for updates...
	I0717 18:54:25.756118  182797 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 18:54:25.757684  182797 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16890-138069/kubeconfig
	I0717 18:54:25.759331  182797 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-138069/.minikube
	I0717 18:54:25.761599  182797 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 18:54:25.763330  182797 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 18:54:25.764989  182797 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 18:54:25.786683  182797 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 18:54:25.786761  182797 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 18:54:25.841051  182797 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:37 SystemTime:2023-07-17 18:54:25.831809218 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 18:54:25.841194  182797 docker.go:294] overlay module found
	I0717 18:54:25.843398  182797 out.go:177] * Using the docker driver based on user configuration
	I0717 18:54:25.845009  182797 start.go:298] selected driver: docker
	I0717 18:54:25.845024  182797 start.go:880] validating driver "docker" against <nil>
	I0717 18:54:25.845036  182797 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 18:54:25.845754  182797 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 18:54:25.903164  182797 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:37 SystemTime:2023-07-17 18:54:25.895029371 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 18:54:25.903328  182797 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0717 18:54:25.903552  182797 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 18:54:25.905916  182797 out.go:177] * Using Docker driver with root privileges
	I0717 18:54:25.907499  182797 cni.go:84] Creating CNI manager for ""
	I0717 18:54:25.907524  182797 cni.go:149] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 18:54:25.907535  182797 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 18:54:25.907551  182797 start_flags.go:319] config:
	{Name:ingress-addon-legacy-795879 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-795879 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 18:54:25.909348  182797 out.go:177] * Starting control plane node ingress-addon-legacy-795879 in cluster ingress-addon-legacy-795879
	I0717 18:54:25.910828  182797 cache.go:122] Beginning downloading kic base image for docker with crio
	I0717 18:54:25.912356  182797 out.go:177] * Pulling base image ...
	I0717 18:54:25.913838  182797 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0717 18:54:25.913943  182797 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 18:54:25.929802  182797 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0717 18:54:25.929836  182797 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0717 18:54:25.938130  182797 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0717 18:54:25.938172  182797 cache.go:57] Caching tarball of preloaded images
	I0717 18:54:25.938326  182797 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0717 18:54:25.940552  182797 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0717 18:54:25.942290  182797 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0717 18:54:25.971208  182797 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/16890-138069/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0717 18:54:29.056906  182797 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0717 18:54:29.057009  182797 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16890-138069/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0717 18:54:30.017838  182797 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I0717 18:54:30.018225  182797 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/config.json ...
	I0717 18:54:30.018261  182797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/config.json: {Name:mk6943a20f24cb929d26f38861f25198d7a7e343 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:54:30.018445  182797 cache.go:195] Successfully downloaded all kic artifacts
	I0717 18:54:30.018468  182797 start.go:365] acquiring machines lock for ingress-addon-legacy-795879: {Name:mke397234fcc3e25da6bc74fc2e44f873b29bf04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 18:54:30.018504  182797 start.go:369] acquired machines lock for "ingress-addon-legacy-795879" in 27.257µs
	I0717 18:54:30.018529  182797 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-795879 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-795879 Namespace:default APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:54:30.018598  182797 start.go:125] createHost starting for "" (driver="docker")
	I0717 18:54:30.021419  182797 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0717 18:54:30.021721  182797 start.go:159] libmachine.API.Create for "ingress-addon-legacy-795879" (driver="docker")
	I0717 18:54:30.021757  182797 client.go:168] LocalClient.Create starting
	I0717 18:54:30.021822  182797 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem
	I0717 18:54:30.021855  182797 main.go:141] libmachine: Decoding PEM data...
	I0717 18:54:30.021875  182797 main.go:141] libmachine: Parsing certificate...
	I0717 18:54:30.021926  182797 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16890-138069/.minikube/certs/cert.pem
	I0717 18:54:30.021946  182797 main.go:141] libmachine: Decoding PEM data...
	I0717 18:54:30.021956  182797 main.go:141] libmachine: Parsing certificate...
	I0717 18:54:30.022243  182797 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-795879 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0717 18:54:30.038190  182797 cli_runner.go:211] docker network inspect ingress-addon-legacy-795879 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0717 18:54:30.038279  182797 network_create.go:281] running [docker network inspect ingress-addon-legacy-795879] to gather additional debugging logs...
	I0717 18:54:30.038302  182797 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-795879
	W0717 18:54:30.054614  182797 cli_runner.go:211] docker network inspect ingress-addon-legacy-795879 returned with exit code 1
	I0717 18:54:30.054660  182797 network_create.go:284] error running [docker network inspect ingress-addon-legacy-795879]: docker network inspect ingress-addon-legacy-795879: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-795879 not found
	I0717 18:54:30.054685  182797 network_create.go:286] output of [docker network inspect ingress-addon-legacy-795879]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-795879 not found
	
	** /stderr **
	I0717 18:54:30.054755  182797 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 18:54:30.071928  182797 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000d4e540}
	I0717 18:54:30.071993  182797 network_create.go:123] attempt to create docker network ingress-addon-legacy-795879 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0717 18:54:30.072048  182797 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-795879 ingress-addon-legacy-795879
	I0717 18:54:30.125441  182797 network_create.go:107] docker network ingress-addon-legacy-795879 192.168.49.0/24 created
	I0717 18:54:30.125480  182797 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-795879" container
	I0717 18:54:30.125544  182797 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0717 18:54:30.142048  182797 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-795879 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-795879 --label created_by.minikube.sigs.k8s.io=true
	I0717 18:54:30.159402  182797 oci.go:103] Successfully created a docker volume ingress-addon-legacy-795879
	I0717 18:54:30.159491  182797 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-795879-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-795879 --entrypoint /usr/bin/test -v ingress-addon-legacy-795879:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0717 18:54:31.925060  182797 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-795879-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-795879 --entrypoint /usr/bin/test -v ingress-addon-legacy-795879:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib: (1.76551773s)
	I0717 18:54:31.925095  182797 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-795879
	I0717 18:54:31.925129  182797 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0717 18:54:31.925150  182797 kic.go:190] Starting extracting preloaded images to volume ...
	I0717 18:54:31.925206  182797 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16890-138069/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-795879:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0717 18:54:37.236530  182797 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16890-138069/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-795879:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (5.311241644s)
	I0717 18:54:37.236564  182797 kic.go:199] duration metric: took 5.311410 seconds to extract preloaded images to volume
	W0717 18:54:37.236693  182797 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0717 18:54:37.236781  182797 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0717 18:54:37.289774  182797 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-795879 --name ingress-addon-legacy-795879 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-795879 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-795879 --network ingress-addon-legacy-795879 --ip 192.168.49.2 --volume ingress-addon-legacy-795879:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0717 18:54:37.595600  182797 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-795879 --format={{.State.Running}}
	I0717 18:54:37.614902  182797 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-795879 --format={{.State.Status}}
	I0717 18:54:37.631722  182797 cli_runner.go:164] Run: docker exec ingress-addon-legacy-795879 stat /var/lib/dpkg/alternatives/iptables
	I0717 18:54:37.687916  182797 oci.go:144] the created container "ingress-addon-legacy-795879" has a running status.
	I0717 18:54:37.687950  182797 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16890-138069/.minikube/machines/ingress-addon-legacy-795879/id_rsa...
	I0717 18:54:37.790391  182797 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/machines/ingress-addon-legacy-795879/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0717 18:54:37.790433  182797 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16890-138069/.minikube/machines/ingress-addon-legacy-795879/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0717 18:54:37.809119  182797 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-795879 --format={{.State.Status}}
	I0717 18:54:37.825689  182797 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0717 18:54:37.825717  182797 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-795879 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0717 18:54:37.896479  182797 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-795879 --format={{.State.Status}}
	I0717 18:54:37.912616  182797 machine.go:88] provisioning docker machine ...
	I0717 18:54:37.912674  182797 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-795879"
	I0717 18:54:37.912728  182797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-795879
	I0717 18:54:37.931290  182797 main.go:141] libmachine: Using SSH client type: native
	I0717 18:54:37.931934  182797 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0717 18:54:37.931957  182797 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-795879 && echo "ingress-addon-legacy-795879" | sudo tee /etc/hostname
	I0717 18:54:37.932748  182797 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0717 18:54:41.066934  182797 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-795879
	
	I0717 18:54:41.067008  182797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-795879
	I0717 18:54:41.083456  182797 main.go:141] libmachine: Using SSH client type: native
	I0717 18:54:41.083899  182797 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0717 18:54:41.083935  182797 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-795879' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-795879/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-795879' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 18:54:41.208018  182797 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 18:54:41.208061  182797 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16890-138069/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-138069/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-138069/.minikube}
	I0717 18:54:41.208088  182797 ubuntu.go:177] setting up certificates
	I0717 18:54:41.208101  182797 provision.go:83] configureAuth start
	I0717 18:54:41.208159  182797 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-795879
	I0717 18:54:41.224347  182797 provision.go:138] copyHostCerts
	I0717 18:54:41.224389  182797 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16890-138069/.minikube/ca.pem
	I0717 18:54:41.224432  182797 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-138069/.minikube/ca.pem, removing ...
	I0717 18:54:41.224442  182797 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-138069/.minikube/ca.pem
	I0717 18:54:41.224520  182797 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-138069/.minikube/ca.pem (1078 bytes)
	I0717 18:54:41.224611  182797 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16890-138069/.minikube/cert.pem
	I0717 18:54:41.224636  182797 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-138069/.minikube/cert.pem, removing ...
	I0717 18:54:41.224644  182797 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-138069/.minikube/cert.pem
	I0717 18:54:41.224680  182797 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-138069/.minikube/cert.pem (1123 bytes)
	I0717 18:54:41.224759  182797 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16890-138069/.minikube/key.pem
	I0717 18:54:41.224784  182797 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-138069/.minikube/key.pem, removing ...
	I0717 18:54:41.224793  182797 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-138069/.minikube/key.pem
	I0717 18:54:41.224825  182797 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-138069/.minikube/key.pem (1675 bytes)
	I0717 18:54:41.224902  182797 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-138069/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-795879 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-795879]
	I0717 18:54:41.345544  182797 provision.go:172] copyRemoteCerts
	I0717 18:54:41.345607  182797 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 18:54:41.345649  182797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-795879
	I0717 18:54:41.362633  182797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/ingress-addon-legacy-795879/id_rsa Username:docker}
	I0717 18:54:41.452710  182797 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 18:54:41.452778  182797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 18:54:41.473847  182797 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 18:54:41.473913  182797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0717 18:54:41.495294  182797 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 18:54:41.495368  182797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 18:54:41.516433  182797 provision.go:86] duration metric: configureAuth took 308.320311ms
	I0717 18:54:41.516459  182797 ubuntu.go:193] setting minikube options for container-runtime
	I0717 18:54:41.516611  182797 config.go:182] Loaded profile config "ingress-addon-legacy-795879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0717 18:54:41.516710  182797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-795879
	I0717 18:54:41.532250  182797 main.go:141] libmachine: Using SSH client type: native
	I0717 18:54:41.532661  182797 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0717 18:54:41.532678  182797 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 18:54:41.767867  182797 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 18:54:41.767907  182797 machine.go:91] provisioned docker machine in 3.855266287s
	I0717 18:54:41.767918  182797 client.go:171] LocalClient.Create took 11.746155535s
	I0717 18:54:41.767943  182797 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-795879" took 11.746221531s
	I0717 18:54:41.767954  182797 start.go:300] post-start starting for "ingress-addon-legacy-795879" (driver="docker")
	I0717 18:54:41.767970  182797 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 18:54:41.768085  182797 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 18:54:41.768143  182797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-795879
	I0717 18:54:41.784362  182797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/ingress-addon-legacy-795879/id_rsa Username:docker}
	I0717 18:54:41.876728  182797 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 18:54:41.880004  182797 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 18:54:41.880044  182797 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 18:54:41.880053  182797 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 18:54:41.880059  182797 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0717 18:54:41.880072  182797 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-138069/.minikube/addons for local assets ...
	I0717 18:54:41.880123  182797 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-138069/.minikube/files for local assets ...
	I0717 18:54:41.880212  182797 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/1448222.pem -> 1448222.pem in /etc/ssl/certs
	I0717 18:54:41.880222  182797 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/1448222.pem -> /etc/ssl/certs/1448222.pem
	I0717 18:54:41.880304  182797 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 18:54:41.887989  182797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/1448222.pem --> /etc/ssl/certs/1448222.pem (1708 bytes)
	I0717 18:54:41.909354  182797 start.go:303] post-start completed in 141.378884ms
	I0717 18:54:41.909715  182797 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-795879
	I0717 18:54:41.926648  182797 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/config.json ...
	I0717 18:54:41.926907  182797 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 18:54:41.926955  182797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-795879
	I0717 18:54:41.944142  182797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/ingress-addon-legacy-795879/id_rsa Username:docker}
	I0717 18:54:42.032904  182797 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 18:54:42.037085  182797 start.go:128] duration metric: createHost completed in 12.018467539s
	I0717 18:54:42.037104  182797 start.go:83] releasing machines lock for "ingress-addon-legacy-795879", held for 12.018585456s
	I0717 18:54:42.037166  182797 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-795879
	I0717 18:54:42.053438  182797 ssh_runner.go:195] Run: cat /version.json
	I0717 18:54:42.053500  182797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-795879
	I0717 18:54:42.053520  182797 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 18:54:42.053580  182797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-795879
	I0717 18:54:42.069794  182797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/ingress-addon-legacy-795879/id_rsa Username:docker}
	I0717 18:54:42.070849  182797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/ingress-addon-legacy-795879/id_rsa Username:docker}
	W0717 18:54:42.241226  182797 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
	I0717 18:54:42.241299  182797 ssh_runner.go:195] Run: systemctl --version
	I0717 18:54:42.245536  182797 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 18:54:42.383324  182797 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 18:54:42.387613  182797 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 18:54:42.404843  182797 cni.go:227] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0717 18:54:42.404934  182797 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 18:54:42.430647  182797 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0717 18:54:42.430676  182797 start.go:469] detecting cgroup driver to use...
	I0717 18:54:42.430708  182797 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 18:54:42.430761  182797 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 18:54:42.444403  182797 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 18:54:42.454272  182797 docker.go:196] disabling cri-docker service (if available) ...
	I0717 18:54:42.454320  182797 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 18:54:42.466196  182797 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 18:54:42.478703  182797 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 18:54:42.554739  182797 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 18:54:42.630295  182797 docker.go:212] disabling docker service ...
	I0717 18:54:42.630363  182797 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 18:54:42.648674  182797 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 18:54:42.659575  182797 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 18:54:42.735438  182797 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 18:54:42.819605  182797 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 18:54:42.830714  182797 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 18:54:42.845466  182797 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0717 18:54:42.845529  182797 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:54:42.854679  182797 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 18:54:42.854738  182797 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:54:42.864025  182797 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:54:42.873794  182797 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 18:54:42.883187  182797 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 18:54:42.892319  182797 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 18:54:42.900282  182797 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 18:54:42.908206  182797 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 18:54:42.982195  182797 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 18:54:43.094096  182797 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 18:54:43.094172  182797 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 18:54:43.097770  182797 start.go:537] Will wait 60s for crictl version
	I0717 18:54:43.097830  182797 ssh_runner.go:195] Run: which crictl
	I0717 18:54:43.101133  182797 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 18:54:43.133926  182797 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0717 18:54:43.134001  182797 ssh_runner.go:195] Run: crio --version
	I0717 18:54:43.168790  182797 ssh_runner.go:195] Run: crio --version
	I0717 18:54:43.207067  182797 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I0717 18:54:43.208974  182797 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-795879 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 18:54:43.225803  182797 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0717 18:54:43.229683  182797 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:54:43.240869  182797 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0717 18:54:43.240930  182797 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:54:43.287494  182797 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0717 18:54:43.287562  182797 ssh_runner.go:195] Run: which lz4
	I0717 18:54:43.291156  182797 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0717 18:54:43.291263  182797 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 18:54:43.294736  182797 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 18:54:43.294792  182797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I0717 18:54:44.252262  182797 crio.go:444] Took 0.961028 seconds to copy over tarball
	I0717 18:54:44.252325  182797 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 18:54:46.512943  182797 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.260588464s)
	I0717 18:54:46.512975  182797 crio.go:451] Took 2.260685 seconds to extract the tarball
	I0717 18:54:46.512988  182797 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 18:54:46.582476  182797 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 18:54:46.613760  182797 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0717 18:54:46.613784  182797 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 18:54:46.613851  182797 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:54:46.613866  182797 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0717 18:54:46.613894  182797 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0717 18:54:46.613959  182797 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0717 18:54:46.614019  182797 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0717 18:54:46.613898  182797 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0717 18:54:46.614041  182797 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0717 18:54:46.613867  182797 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0717 18:54:46.615609  182797 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0717 18:54:46.615622  182797 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0717 18:54:46.615686  182797 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:54:46.616019  182797 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0717 18:54:46.616144  182797 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0717 18:54:46.616210  182797 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0717 18:54:46.616533  182797 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0717 18:54:46.618384  182797 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0717 18:54:46.789006  182797 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0717 18:54:46.796629  182797 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0717 18:54:46.796685  182797 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0717 18:54:46.800929  182797 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0717 18:54:46.802233  182797 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0717 18:54:46.814217  182797 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0717 18:54:46.825123  182797 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0717 18:54:46.877445  182797 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0717 18:54:46.877562  182797 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0717 18:54:46.877634  182797 ssh_runner.go:195] Run: which crictl
	I0717 18:54:46.884766  182797 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0717 18:54:46.884988  182797 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0717 18:54:46.884945  182797 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0717 18:54:46.885029  182797 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0717 18:54:46.885037  182797 ssh_runner.go:195] Run: which crictl
	I0717 18:54:46.885075  182797 ssh_runner.go:195] Run: which crictl
	I0717 18:54:46.886731  182797 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0717 18:54:46.886758  182797 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0717 18:54:46.886770  182797 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0717 18:54:46.886783  182797 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0717 18:54:46.886806  182797 ssh_runner.go:195] Run: which crictl
	I0717 18:54:46.886811  182797 ssh_runner.go:195] Run: which crictl
	I0717 18:54:46.893147  182797 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0717 18:54:46.893177  182797 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0717 18:54:46.893212  182797 ssh_runner.go:195] Run: which crictl
	I0717 18:54:46.911661  182797 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:54:46.969916  182797 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0717 18:54:46.969965  182797 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0717 18:54:46.969980  182797 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0717 18:54:46.970001  182797 ssh_runner.go:195] Run: which crictl
	I0717 18:54:46.970091  182797 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0717 18:54:46.970103  182797 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0717 18:54:46.970207  182797 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0717 18:54:46.970251  182797 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0717 18:54:46.970303  182797 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0717 18:54:47.270499  182797 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0717 18:54:47.270578  182797 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0717 18:54:47.270642  182797 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0717 18:54:47.270719  182797 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0717 18:54:47.270729  182797 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0717 18:54:47.270805  182797 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0717 18:54:47.270884  182797 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0717 18:54:47.302232  182797 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0717 18:54:47.302284  182797 cache_images.go:92] LoadImages completed in 688.487219ms
	W0717 18:54:47.302378  182797 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20: no such file or directory
	I0717 18:54:47.302461  182797 ssh_runner.go:195] Run: crio config
	I0717 18:54:47.346057  182797 cni.go:84] Creating CNI manager for ""
	I0717 18:54:47.346083  182797 cni.go:149] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 18:54:47.346103  182797 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 18:54:47.346127  182797 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-795879 NodeName:ingress-addon-legacy-795879 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 18:54:47.346344  182797 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-795879"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 18:54:47.346459  182797 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-795879 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-795879 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 18:54:47.346530  182797 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0717 18:54:47.354877  182797 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 18:54:47.354952  182797 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 18:54:47.362992  182797 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I0717 18:54:47.379092  182797 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0717 18:54:47.395128  182797 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0717 18:54:47.411097  182797 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0717 18:54:47.414375  182797 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 18:54:47.424117  182797 certs.go:56] Setting up /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879 for IP: 192.168.49.2
	I0717 18:54:47.424167  182797 certs.go:190] acquiring lock for shared ca certs: {Name:mk42196ce59710ebf500640671660e2f4656c84e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:54:47.424364  182797 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16890-138069/.minikube/ca.key
	I0717 18:54:47.424445  182797 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16890-138069/.minikube/proxy-client-ca.key
	I0717 18:54:47.424496  182797 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/client.key
	I0717 18:54:47.424508  182797 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/client.crt with IP's: []
	I0717 18:54:47.510756  182797 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/client.crt ...
	I0717 18:54:47.510791  182797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/client.crt: {Name:mked511880c9f367c9e9a6cb77ce8f85a1f049b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:54:47.511015  182797 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/client.key ...
	I0717 18:54:47.511030  182797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/client.key: {Name:mk57e7bca217e86d506ce4f57f488fcdf507a8fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:54:47.511144  182797 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/apiserver.key.dd3b5fb2
	I0717 18:54:47.511161  182797 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0717 18:54:47.660374  182797 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/apiserver.crt.dd3b5fb2 ...
	I0717 18:54:47.660409  182797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/apiserver.crt.dd3b5fb2: {Name:mk62594f56c4cd2f3e88a9b6398862b4459dd4ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:54:47.660606  182797 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/apiserver.key.dd3b5fb2 ...
	I0717 18:54:47.660623  182797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/apiserver.key.dd3b5fb2: {Name:mke234c8fe35418fafaf2bdf94268a734039e7aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:54:47.660727  182797 certs.go:337] copying /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/apiserver.crt
	I0717 18:54:47.660795  182797 certs.go:341] copying /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/apiserver.key
	I0717 18:54:47.660845  182797 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/proxy-client.key
	I0717 18:54:47.660859  182797 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/proxy-client.crt with IP's: []
	I0717 18:54:47.738794  182797 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/proxy-client.crt ...
	I0717 18:54:47.738824  182797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/proxy-client.crt: {Name:mk6afcb54c0b9fe129990f0d5613f8b72e104801 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:54:47.739015  182797 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/proxy-client.key ...
	I0717 18:54:47.739031  182797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/proxy-client.key: {Name:mk4ea805aa2d44ae0793959b0a4272eefae583e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:54:47.739133  182797 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 18:54:47.739156  182797 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 18:54:47.739169  182797 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 18:54:47.739181  182797 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 18:54:47.739203  182797 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 18:54:47.739220  182797 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 18:54:47.739235  182797 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 18:54:47.739247  182797 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 18:54:47.739304  182797 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/home/jenkins/minikube-integration/16890-138069/.minikube/certs/144822.pem (1338 bytes)
	W0717 18:54:47.739341  182797 certs.go:433] ignoring /home/jenkins/minikube-integration/16890-138069/.minikube/certs/home/jenkins/minikube-integration/16890-138069/.minikube/certs/144822_empty.pem, impossibly tiny 0 bytes
	I0717 18:54:47.739352  182797 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 18:54:47.739375  182797 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem (1078 bytes)
	I0717 18:54:47.739398  182797 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/home/jenkins/minikube-integration/16890-138069/.minikube/certs/cert.pem (1123 bytes)
	I0717 18:54:47.739419  182797 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/home/jenkins/minikube-integration/16890-138069/.minikube/certs/key.pem (1675 bytes)
	I0717 18:54:47.739458  182797 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/1448222.pem (1708 bytes)
	I0717 18:54:47.739492  182797 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:54:47.739507  182797 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/144822.pem -> /usr/share/ca-certificates/144822.pem
	I0717 18:54:47.739541  182797 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/1448222.pem -> /usr/share/ca-certificates/1448222.pem
	I0717 18:54:47.740235  182797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 18:54:47.763201  182797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 18:54:47.785397  182797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 18:54:47.807717  182797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 18:54:47.829326  182797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 18:54:47.851037  182797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 18:54:47.872405  182797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 18:54:47.893382  182797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 18:54:47.914848  182797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 18:54:47.936170  182797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/certs/144822.pem --> /usr/share/ca-certificates/144822.pem (1338 bytes)
	I0717 18:54:47.958054  182797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/1448222.pem --> /usr/share/ca-certificates/1448222.pem (1708 bytes)
	I0717 18:54:47.980386  182797 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 18:54:47.996907  182797 ssh_runner.go:195] Run: openssl version
	I0717 18:54:48.002333  182797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 18:54:48.011156  182797 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:54:48.014689  182797 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:46 /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:54:48.014758  182797 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 18:54:48.021312  182797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 18:54:48.030256  182797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144822.pem && ln -fs /usr/share/ca-certificates/144822.pem /etc/ssl/certs/144822.pem"
	I0717 18:54:48.039271  182797 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144822.pem
	I0717 18:54:48.042906  182797 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:51 /usr/share/ca-certificates/144822.pem
	I0717 18:54:48.042964  182797 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144822.pem
	I0717 18:54:48.049814  182797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/144822.pem /etc/ssl/certs/51391683.0"
	I0717 18:54:48.059175  182797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1448222.pem && ln -fs /usr/share/ca-certificates/1448222.pem /etc/ssl/certs/1448222.pem"
	I0717 18:54:48.068567  182797 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1448222.pem
	I0717 18:54:48.072177  182797 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:51 /usr/share/ca-certificates/1448222.pem
	I0717 18:54:48.072253  182797 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1448222.pem
	I0717 18:54:48.078745  182797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1448222.pem /etc/ssl/certs/3ec20f2e.0"
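
The symlink commands above follow OpenSSL's hashed-directory convention: `openssl x509 -hash -noout` prints the subject-name hash of a certificate, and `/etc/ssl/certs/<hash>.0` must point at the matching PEM so OpenSSL-based clients can find the CA by scanning that directory. A minimal sketch of the same two steps for the minikube CA (paths taken from the log; the hash value depends on the certificate contents):

    # print the subject hash of the CA copied in above
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    # link it into the hashed cert directory, as the ssh_runner commands do
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
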
	I0717 18:54:48.087674  182797 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 18:54:48.091052  182797 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 18:54:48.091113  182797 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-795879 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-795879 Namespace:default APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMet
rics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 18:54:48.091213  182797 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 18:54:48.091269  182797 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 18:54:48.125608  182797 cri.go:89] found id: ""
	I0717 18:54:48.125683  182797 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 18:54:48.134265  182797 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 18:54:48.142900  182797 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0717 18:54:48.142975  182797 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 18:54:48.151111  182797 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 18:54:48.151158  182797 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0717 18:54:48.194944  182797 kubeadm.go:322] W0717 18:54:48.194293    1364 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0717 18:54:48.235407  182797 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1037-gcp\n", err: exit status 1
	I0717 18:54:48.306296  182797 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 18:54:51.027379  182797 kubeadm.go:322] W0717 18:54:51.026982    1364 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0717 18:54:51.028485  182797 kubeadm.go:322] W0717 18:54:51.028178    1364 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0717 18:54:59.487948  182797 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0717 18:54:59.488049  182797 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 18:54:59.488146  182797 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0717 18:54:59.488253  182797 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1037-gcp
	I0717 18:54:59.488333  182797 kubeadm.go:322] OS: Linux
	I0717 18:54:59.488421  182797 kubeadm.go:322] CGROUPS_CPU: enabled
	I0717 18:54:59.488506  182797 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0717 18:54:59.488579  182797 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0717 18:54:59.488672  182797 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0717 18:54:59.488736  182797 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0717 18:54:59.488797  182797 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0717 18:54:59.488860  182797 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 18:54:59.488956  182797 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 18:54:59.489071  182797 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 18:54:59.489188  182797 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 18:54:59.489305  182797 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 18:54:59.489358  182797 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0717 18:54:59.489438  182797 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 18:54:59.491502  182797 out.go:204]   - Generating certificates and keys ...
	I0717 18:54:59.491587  182797 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 18:54:59.491652  182797 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 18:54:59.491729  182797 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 18:54:59.491834  182797 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0717 18:54:59.491941  182797 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0717 18:54:59.492042  182797 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0717 18:54:59.492119  182797 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0717 18:54:59.492275  182797 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-795879 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0717 18:54:59.492323  182797 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0717 18:54:59.492508  182797 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-795879 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0717 18:54:59.492610  182797 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 18:54:59.492701  182797 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 18:54:59.492774  182797 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0717 18:54:59.492885  182797 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 18:54:59.492970  182797 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 18:54:59.493045  182797 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 18:54:59.493139  182797 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 18:54:59.493210  182797 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 18:54:59.493285  182797 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 18:54:59.494953  182797 out.go:204]   - Booting up control plane ...
	I0717 18:54:59.495037  182797 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 18:54:59.495106  182797 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 18:54:59.495174  182797 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 18:54:59.495283  182797 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 18:54:59.495471  182797 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 18:54:59.495574  182797 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.002449 seconds
	I0717 18:54:59.495737  182797 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 18:54:59.495993  182797 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 18:54:59.496055  182797 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 18:54:59.496174  182797 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-795879 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0717 18:54:59.496224  182797 kubeadm.go:322] [bootstrap-token] Using token: 8n0aw9.sp3cf3bo2xnzbp5l
	I0717 18:54:59.497790  182797 out.go:204]   - Configuring RBAC rules ...
	I0717 18:54:59.497892  182797 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 18:54:59.497998  182797 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 18:54:59.498164  182797 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 18:54:59.498315  182797 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 18:54:59.498496  182797 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 18:54:59.498612  182797 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 18:54:59.498716  182797 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 18:54:59.498780  182797 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0717 18:54:59.498821  182797 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0717 18:54:59.498826  182797 kubeadm.go:322] 
	I0717 18:54:59.498870  182797 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0717 18:54:59.498879  182797 kubeadm.go:322] 
	I0717 18:54:59.498939  182797 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0717 18:54:59.498945  182797 kubeadm.go:322] 
	I0717 18:54:59.498974  182797 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0717 18:54:59.499063  182797 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 18:54:59.499108  182797 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 18:54:59.499114  182797 kubeadm.go:322] 
	I0717 18:54:59.499162  182797 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0717 18:54:59.499229  182797 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 18:54:59.499308  182797 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 18:54:59.499318  182797 kubeadm.go:322] 
	I0717 18:54:59.499442  182797 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 18:54:59.499531  182797 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0717 18:54:59.499546  182797 kubeadm.go:322] 
	I0717 18:54:59.499649  182797 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 8n0aw9.sp3cf3bo2xnzbp5l \
	I0717 18:54:59.499802  182797 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:937c4239101ec8b12459e4fa3de0759350fbf81fa4f52752b966f06f42d7d7ec \
	I0717 18:54:59.499839  182797 kubeadm.go:322]     --control-plane 
	I0717 18:54:59.499846  182797 kubeadm.go:322] 
	I0717 18:54:59.499942  182797 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0717 18:54:59.499951  182797 kubeadm.go:322] 
	I0717 18:54:59.500067  182797 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 8n0aw9.sp3cf3bo2xnzbp5l \
	I0717 18:54:59.500223  182797 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:937c4239101ec8b12459e4fa3de0759350fbf81fa4f52752b966f06f42d7d7ec 
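
The join commands printed above are informational for this run; the test builds a single-node cluster and never executes them. If a second node were wanted, it would normally be added through minikube rather than raw kubeadm, roughly (hypothetical for this profile, not part of the test):

    minikube node add -p ingress-addon-legacy-795879
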
	I0717 18:54:59.500242  182797 cni.go:84] Creating CNI manager for ""
	I0717 18:54:59.500255  182797 cni.go:149] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 18:54:59.501960  182797 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0717 18:54:59.503872  182797 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 18:54:59.507927  182797 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I0717 18:54:59.507946  182797 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 18:54:59.525756  182797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
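
The docker driver plus the crio runtime needs an explicit CNI, so minikube writes its kindnet manifest to /var/tmp/minikube/cni.yaml and applies it with the bundled kubectl above. A hedged way to confirm the plugin landed, assuming kindnet ships as a DaemonSet named "kindnet" in kube-system (which is how minikube's bundled manifest deploys it):

    # run on the node, reusing the same binaries and kubeconfig the log uses
    sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get daemonset kindnet
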
	I0717 18:54:59.960042  182797 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 18:54:59.960077  182797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:54:59.960118  182797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=46c8e7c4243e42e98d29628785c0523fbabbd9b5 minikube.k8s.io/name=ingress-addon-legacy-795879 minikube.k8s.io/updated_at=2023_07_17T18_54_59_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:55:00.081321  182797 ops.go:34] apiserver oom_adj: -16
	I0717 18:55:00.081519  182797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:55:00.649978  182797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:55:01.150224  182797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:55:01.650025  182797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:55:02.149355  182797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:55:02.650266  182797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:55:03.149311  182797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:55:03.650219  182797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:55:04.149377  182797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:55:04.649826  182797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:55:05.149985  182797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:55:05.649508  182797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:55:06.150156  182797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:55:06.650255  182797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:55:07.149429  182797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:55:07.649394  182797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:55:08.150066  182797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:55:08.649482  182797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:55:09.150110  182797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:55:09.649293  182797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:55:10.149259  182797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:55:10.650119  182797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:55:11.150233  182797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:55:11.650324  182797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:55:12.149557  182797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:55:12.650092  182797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:55:13.149553  182797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:55:13.650271  182797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:55:14.149586  182797 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 18:55:14.215050  182797 kubeadm.go:1081] duration metric: took 14.255024561s to wait for elevateKubeSystemPrivileges.
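
The repeated `kubectl get sa default` runs above are a readiness poll: after creating the minikube-rbac ClusterRoleBinding, minikube retries roughly every 500ms until the cluster's default ServiceAccount exists (a sign the service-account controller is up), which is where the 14.25s elevateKubeSystemPrivileges metric comes from. A rough shell equivalent of that wait loop (a sketch, not minikube's actual Go implementation):

    # poll until the default service account appears, as the log does for ~14s
    until sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        get sa default >/dev/null 2>&1; do
      sleep 0.5
    done
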
	I0717 18:55:14.215086  182797 kubeadm.go:406] StartCluster complete in 26.123977965s
	I0717 18:55:14.215106  182797 settings.go:142] acquiring lock: {Name:mk9765434b8f4871dd605367f6caa71617d51b6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:55:14.215182  182797 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16890-138069/kubeconfig
	I0717 18:55:14.215952  182797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-138069/kubeconfig: {Name:mkc53c034e0e90a78d013670a58d5882070a3e3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 18:55:14.216292  182797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 18:55:14.216397  182797 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 18:55:14.216514  182797 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-795879"
	I0717 18:55:14.216533  182797 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-795879"
	I0717 18:55:14.216547  182797 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-795879"
	I0717 18:55:14.216571  182797 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-795879"
	I0717 18:55:14.216594  182797 host.go:66] Checking if "ingress-addon-legacy-795879" exists ...
	I0717 18:55:14.216644  182797 config.go:182] Loaded profile config "ingress-addon-legacy-795879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0717 18:55:14.216982  182797 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-795879 --format={{.State.Status}}
	I0717 18:55:14.216913  182797 kapi.go:59] client config for ingress-addon-legacy-795879: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/client.crt", KeyFile:"/home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/client.key", CAFile:"/home/jenkins/minikube-integration/16890-138069/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]ui
nt8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 18:55:14.217154  182797 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-795879 --format={{.State.Status}}
	I0717 18:55:14.217861  182797 cert_rotation.go:137] Starting client certificate rotation controller
	I0717 18:55:14.238694  182797 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 18:55:14.238078  182797 kapi.go:59] client config for ingress-addon-legacy-795879: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/client.crt", KeyFile:"/home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/client.key", CAFile:"/home/jenkins/minikube-integration/16890-138069/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]ui
nt8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 18:55:14.240634  182797 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:55:14.240659  182797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 18:55:14.240723  182797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-795879
	I0717 18:55:14.241375  182797 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-795879"
	I0717 18:55:14.241411  182797 host.go:66] Checking if "ingress-addon-legacy-795879" exists ...
	I0717 18:55:14.241809  182797 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-795879 --format={{.State.Status}}
	I0717 18:55:14.259496  182797 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 18:55:14.259521  182797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 18:55:14.259592  182797 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-795879
	I0717 18:55:14.261869  182797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/ingress-addon-legacy-795879/id_rsa Username:docker}
	I0717 18:55:14.277399  182797 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/ingress-addon-legacy-795879/id_rsa Username:docker}
	I0717 18:55:14.297422  182797 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
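
The sed pipeline above rewrites the coredns ConfigMap in place: it inserts a `log` directive before the `errors` line and a `hosts` block before `forward . /etc/resolv.conf`, so host.minikube.internal resolves to the host gateway. Reconstructed from those sed expressions (not dumped from the cluster), the relevant Corefile fragment would look roughly like:

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }
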
	I0717 18:55:14.381143  182797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 18:55:14.384928  182797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 18:55:14.679940  182797 start.go:917] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0717 18:55:14.771476  182797 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-795879" context rescaled to 1 replicas
	I0717 18:55:14.771529  182797 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 18:55:14.773821  182797 out.go:177] * Verifying Kubernetes components...
	I0717 18:55:14.775641  182797 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:55:14.921745  182797 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0717 18:55:14.923450  182797 addons.go:502] enable addons completed in 707.053589ms: enabled=[storage-provisioner default-storageclass]
	I0717 18:55:14.920748  182797 kapi.go:59] client config for ingress-addon-legacy-795879: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/client.crt", KeyFile:"/home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/client.key", CAFile:"/home/jenkins/minikube-integration/16890-138069/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]ui
nt8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 18:55:14.923781  182797 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-795879" to be "Ready" ...
	I0717 18:55:16.972028  182797 node_ready.go:58] node "ingress-addon-legacy-795879" has status "Ready":"False"
	I0717 18:55:18.972095  182797 node_ready.go:58] node "ingress-addon-legacy-795879" has status "Ready":"False"
	I0717 18:55:20.136385  182797 node_ready.go:49] node "ingress-addon-legacy-795879" has status "Ready":"True"
	I0717 18:55:20.136411  182797 node_ready.go:38] duration metric: took 5.212603289s waiting for node "ingress-addon-legacy-795879" to be "Ready" ...
	I0717 18:55:20.136425  182797 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:55:20.264479  182797 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-x9prc" in "kube-system" namespace to be "Ready" ...
	I0717 18:55:22.325859  182797 pod_ready.go:102] pod "coredns-66bff467f8-x9prc" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-07-17 18:55:14 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize:}
	I0717 18:55:24.828199  182797 pod_ready.go:102] pod "coredns-66bff467f8-x9prc" in "kube-system" namespace has status "Ready":"False"
	I0717 18:55:27.328855  182797 pod_ready.go:102] pod "coredns-66bff467f8-x9prc" in "kube-system" namespace has status "Ready":"False"
	I0717 18:55:29.828908  182797 pod_ready.go:102] pod "coredns-66bff467f8-x9prc" in "kube-system" namespace has status "Ready":"False"
	I0717 18:55:31.828355  182797 pod_ready.go:92] pod "coredns-66bff467f8-x9prc" in "kube-system" namespace has status "Ready":"True"
	I0717 18:55:31.828380  182797 pod_ready.go:81] duration metric: took 11.563872934s waiting for pod "coredns-66bff467f8-x9prc" in "kube-system" namespace to be "Ready" ...
	I0717 18:55:31.828389  182797 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-795879" in "kube-system" namespace to be "Ready" ...
	I0717 18:55:31.832451  182797 pod_ready.go:92] pod "etcd-ingress-addon-legacy-795879" in "kube-system" namespace has status "Ready":"True"
	I0717 18:55:31.832476  182797 pod_ready.go:81] duration metric: took 4.079588ms waiting for pod "etcd-ingress-addon-legacy-795879" in "kube-system" namespace to be "Ready" ...
	I0717 18:55:31.832491  182797 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-795879" in "kube-system" namespace to be "Ready" ...
	I0717 18:55:31.836576  182797 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-795879" in "kube-system" namespace has status "Ready":"True"
	I0717 18:55:31.836597  182797 pod_ready.go:81] duration metric: took 4.098766ms waiting for pod "kube-apiserver-ingress-addon-legacy-795879" in "kube-system" namespace to be "Ready" ...
	I0717 18:55:31.836606  182797 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-795879" in "kube-system" namespace to be "Ready" ...
	I0717 18:55:31.840424  182797 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-795879" in "kube-system" namespace has status "Ready":"True"
	I0717 18:55:31.840444  182797 pod_ready.go:81] duration metric: took 3.832199ms waiting for pod "kube-controller-manager-ingress-addon-legacy-795879" in "kube-system" namespace to be "Ready" ...
	I0717 18:55:31.840457  182797 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-klr5w" in "kube-system" namespace to be "Ready" ...
	I0717 18:55:31.844312  182797 pod_ready.go:92] pod "kube-proxy-klr5w" in "kube-system" namespace has status "Ready":"True"
	I0717 18:55:31.844335  182797 pod_ready.go:81] duration metric: took 3.871317ms waiting for pod "kube-proxy-klr5w" in "kube-system" namespace to be "Ready" ...
	I0717 18:55:31.844347  182797 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-795879" in "kube-system" namespace to be "Ready" ...
	I0717 18:55:32.023786  182797 request.go:628] Waited for 179.353612ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-795879
	I0717 18:55:32.224038  182797 request.go:628] Waited for 197.369597ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-795879
	I0717 18:55:32.226935  182797 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-795879" in "kube-system" namespace has status "Ready":"True"
	I0717 18:55:32.226963  182797 pod_ready.go:81] duration metric: took 382.60713ms waiting for pod "kube-scheduler-ingress-addon-legacy-795879" in "kube-system" namespace to be "Ready" ...
	I0717 18:55:32.226979  182797 pod_ready.go:38] duration metric: took 12.090542203s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 18:55:32.227000  182797 api_server.go:52] waiting for apiserver process to appear ...
	I0717 18:55:32.227071  182797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 18:55:32.237754  182797 api_server.go:72] duration metric: took 17.466148704s to wait for apiserver process to appear ...
	I0717 18:55:32.237777  182797 api_server.go:88] waiting for apiserver healthz status ...
	I0717 18:55:32.237795  182797 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0717 18:55:32.242703  182797 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0717 18:55:32.243511  182797 api_server.go:141] control plane version: v1.18.20
	I0717 18:55:32.243534  182797 api_server.go:131] duration metric: took 5.751045ms to wait for apiserver health ...
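
The healthz probe above is a plain HTTPS GET against the apiserver endpoint; the 200 status plus "ok" body is what the version check that follows relies on. Reproduced by hand it would look roughly like this (-k skips verification; minikube's client instead trusts the CA installed earlier):

    curl -k https://192.168.49.2:8443/healthz
    # expected on a healthy control plane: ok
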
	I0717 18:55:32.243541  182797 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 18:55:32.423985  182797 request.go:628] Waited for 180.359747ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0717 18:55:32.429409  182797 system_pods.go:59] 8 kube-system pods found
	I0717 18:55:32.429441  182797 system_pods.go:61] "coredns-66bff467f8-x9prc" [9c491755-c38c-4e53-9609-a882488493a2] Running
	I0717 18:55:32.429447  182797 system_pods.go:61] "etcd-ingress-addon-legacy-795879" [539ad4e9-1366-4aa6-bbe8-be2bda66eb79] Running
	I0717 18:55:32.429451  182797 system_pods.go:61] "kindnet-qhnfh" [5c1b5f79-7e0e-4b81-915c-6e9f93b8ec63] Running
	I0717 18:55:32.429457  182797 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-795879" [846d70d1-f129-4ceb-a7e7-10faefffde84] Running
	I0717 18:55:32.429461  182797 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-795879" [314f9440-ece4-4f6d-8d81-7c99ec44a0bc] Running
	I0717 18:55:32.429465  182797 system_pods.go:61] "kube-proxy-klr5w" [106879bf-24bd-4276-af96-c59bd399defb] Running
	I0717 18:55:32.429469  182797 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-795879" [84dd2534-511f-4ce5-b208-2f987c37fa97] Running
	I0717 18:55:32.429475  182797 system_pods.go:61] "storage-provisioner" [ebab4aff-16f6-4298-9319-476443ae9620] Running
	I0717 18:55:32.429483  182797 system_pods.go:74] duration metric: took 185.936723ms to wait for pod list to return data ...
	I0717 18:55:32.429493  182797 default_sa.go:34] waiting for default service account to be created ...
	I0717 18:55:32.623943  182797 request.go:628] Waited for 194.369292ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0717 18:55:32.626670  182797 default_sa.go:45] found service account: "default"
	I0717 18:55:32.626696  182797 default_sa.go:55] duration metric: took 197.199426ms for default service account to be created ...
	I0717 18:55:32.626705  182797 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 18:55:32.824174  182797 request.go:628] Waited for 197.339414ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0717 18:55:32.829305  182797 system_pods.go:86] 8 kube-system pods found
	I0717 18:55:32.829334  182797 system_pods.go:89] "coredns-66bff467f8-x9prc" [9c491755-c38c-4e53-9609-a882488493a2] Running
	I0717 18:55:32.829340  182797 system_pods.go:89] "etcd-ingress-addon-legacy-795879" [539ad4e9-1366-4aa6-bbe8-be2bda66eb79] Running
	I0717 18:55:32.829347  182797 system_pods.go:89] "kindnet-qhnfh" [5c1b5f79-7e0e-4b81-915c-6e9f93b8ec63] Running
	I0717 18:55:32.829351  182797 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-795879" [846d70d1-f129-4ceb-a7e7-10faefffde84] Running
	I0717 18:55:32.829356  182797 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-795879" [314f9440-ece4-4f6d-8d81-7c99ec44a0bc] Running
	I0717 18:55:32.829359  182797 system_pods.go:89] "kube-proxy-klr5w" [106879bf-24bd-4276-af96-c59bd399defb] Running
	I0717 18:55:32.829364  182797 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-795879" [84dd2534-511f-4ce5-b208-2f987c37fa97] Running
	I0717 18:55:32.829367  182797 system_pods.go:89] "storage-provisioner" [ebab4aff-16f6-4298-9319-476443ae9620] Running
	I0717 18:55:32.829374  182797 system_pods.go:126] duration metric: took 202.663826ms to wait for k8s-apps to be running ...
	I0717 18:55:32.829380  182797 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 18:55:32.829427  182797 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 18:55:32.840518  182797 system_svc.go:56] duration metric: took 11.128084ms WaitForService to wait for kubelet.
	I0717 18:55:32.840549  182797 kubeadm.go:581] duration metric: took 18.068944297s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 18:55:32.840578  182797 node_conditions.go:102] verifying NodePressure condition ...
	I0717 18:55:33.024071  182797 request.go:628] Waited for 183.338778ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0717 18:55:33.027088  182797 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0717 18:55:33.027115  182797 node_conditions.go:123] node cpu capacity is 8
	I0717 18:55:33.027127  182797 node_conditions.go:105] duration metric: took 186.54353ms to run NodePressure ...
	I0717 18:55:33.027138  182797 start.go:228] waiting for startup goroutines ...
	I0717 18:55:33.027146  182797 start.go:233] waiting for cluster config update ...
	I0717 18:55:33.027156  182797 start.go:242] writing updated cluster config ...
	I0717 18:55:33.027423  182797 ssh_runner.go:195] Run: rm -f paused
	I0717 18:55:33.075198  182797 start.go:578] kubectl: 1.27.3, cluster: 1.18.20 (minor skew: 9)
	I0717 18:55:33.077811  182797 out.go:177] 
	W0717 18:55:33.079782  182797 out.go:239] ! /usr/local/bin/kubectl is version 1.27.3, which may have incompatibilities with Kubernetes 1.18.20.
	I0717 18:55:33.081736  182797 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0717 18:55:33.083554  182797 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-795879" cluster and "default" namespace by default
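
kubectl's documented skew policy only covers one minor version in either direction, so the 1.27.3 host client against this 1.18.20 control plane (minor skew 9) is well outside support, hence the warning above. The suggested workaround, spelled out for this profile, uses the version-matched client that minikube downloads:

    minikube -p ingress-addon-legacy-795879 kubectl -- get pods -A
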
	
	* 
	* ==> CRI-O <==
	* Jul 17 18:58:20 ingress-addon-legacy-795879 crio[955]: time="2023-07-17 18:58:20.947379735Z" level=info msg="Started container" PID=4726 containerID=8f4a3d91f2e4e18328136094d6ad82e725b0fba626181019689b312133957682 description=default/hello-world-app-5f5d8b66bb-n8zrw/hello-world-app id=f7a2c07d-b911-45c9-8060-43147ace6f90 name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=0758d124cdbbde6a44dab6bdc0c249d4ec95b518d873333154b46e77afe528ec
	Jul 17 18:58:29 ingress-addon-legacy-795879 crio[955]: time="2023-07-17 18:58:29.678201962Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=60ac8ec1-be2b-489d-9eb6-eba98d6f4913 name=/runtime.v1alpha2.ImageService/ImageStatus
	Jul 17 18:58:35 ingress-addon-legacy-795879 crio[955]: time="2023-07-17 18:58:35.678903335Z" level=info msg="Stopping pod sandbox: 85b80179911e69d64582a1a93063b5efbf6bf61867db3ca5ace5c78e2381a094" id=709014da-8a31-4fd7-aecc-f2d77fb59cae name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 17 18:58:35 ingress-addon-legacy-795879 crio[955]: time="2023-07-17 18:58:35.680054396Z" level=info msg="Stopped pod sandbox: 85b80179911e69d64582a1a93063b5efbf6bf61867db3ca5ace5c78e2381a094" id=709014da-8a31-4fd7-aecc-f2d77fb59cae name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 17 18:58:36 ingress-addon-legacy-795879 crio[955]: time="2023-07-17 18:58:36.159361638Z" level=info msg="Stopping pod sandbox: 85b80179911e69d64582a1a93063b5efbf6bf61867db3ca5ace5c78e2381a094" id=1a6ed61b-5de4-4f5e-a1ea-47b99e3c3bd8 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 17 18:58:36 ingress-addon-legacy-795879 crio[955]: time="2023-07-17 18:58:36.159416098Z" level=info msg="Stopped pod sandbox (already stopped): 85b80179911e69d64582a1a93063b5efbf6bf61867db3ca5ace5c78e2381a094" id=1a6ed61b-5de4-4f5e-a1ea-47b99e3c3bd8 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 17 18:58:36 ingress-addon-legacy-795879 crio[955]: time="2023-07-17 18:58:36.908582091Z" level=info msg="Stopping container: 5c5e114fb28093532182603a1b266473a0391ce4a5da9e1eb061b53e128070f0 (timeout: 2s)" id=7b5ea351-63ed-4b54-8b96-b23896a7cab5 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jul 17 18:58:36 ingress-addon-legacy-795879 crio[955]: time="2023-07-17 18:58:36.911061432Z" level=info msg="Stopping container: 5c5e114fb28093532182603a1b266473a0391ce4a5da9e1eb061b53e128070f0 (timeout: 2s)" id=445ababc-e6fd-485e-88ea-a2188e3da3b5 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jul 17 18:58:37 ingress-addon-legacy-795879 crio[955]: time="2023-07-17 18:58:37.677652952Z" level=info msg="Stopping pod sandbox: 85b80179911e69d64582a1a93063b5efbf6bf61867db3ca5ace5c78e2381a094" id=7519318d-9288-4ccb-8ef7-5f2ac7359726 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 17 18:58:37 ingress-addon-legacy-795879 crio[955]: time="2023-07-17 18:58:37.677716859Z" level=info msg="Stopped pod sandbox (already stopped): 85b80179911e69d64582a1a93063b5efbf6bf61867db3ca5ace5c78e2381a094" id=7519318d-9288-4ccb-8ef7-5f2ac7359726 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 17 18:58:38 ingress-addon-legacy-795879 crio[955]: time="2023-07-17 18:58:38.918764872Z" level=warning msg="Stopping container 5c5e114fb28093532182603a1b266473a0391ce4a5da9e1eb061b53e128070f0 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=7b5ea351-63ed-4b54-8b96-b23896a7cab5 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jul 17 18:58:38 ingress-addon-legacy-795879 conmon[3381]: conmon 5c5e114fb28093532182 <ninfo>: container 3393 exited with status 137
	Jul 17 18:58:39 ingress-addon-legacy-795879 crio[955]: time="2023-07-17 18:58:39.079777523Z" level=info msg="Stopped container 5c5e114fb28093532182603a1b266473a0391ce4a5da9e1eb061b53e128070f0: ingress-nginx/ingress-nginx-controller-7fcf777cb7-nk7r7/controller" id=7b5ea351-63ed-4b54-8b96-b23896a7cab5 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jul 17 18:58:39 ingress-addon-legacy-795879 crio[955]: time="2023-07-17 18:58:39.079823808Z" level=info msg="Stopped container 5c5e114fb28093532182603a1b266473a0391ce4a5da9e1eb061b53e128070f0: ingress-nginx/ingress-nginx-controller-7fcf777cb7-nk7r7/controller" id=445ababc-e6fd-485e-88ea-a2188e3da3b5 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Jul 17 18:58:39 ingress-addon-legacy-795879 crio[955]: time="2023-07-17 18:58:39.080479209Z" level=info msg="Stopping pod sandbox: a66d8e29c285037476e7b6c0a15c617d0744b8ffa9a70f7c21f9b17f94d6799e" id=7023f798-f06b-40c1-ba6e-2e0629dc8d5a name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 17 18:58:39 ingress-addon-legacy-795879 crio[955]: time="2023-07-17 18:58:39.080488464Z" level=info msg="Stopping pod sandbox: a66d8e29c285037476e7b6c0a15c617d0744b8ffa9a70f7c21f9b17f94d6799e" id=92e0c9d2-ac6a-47bb-8d5f-436e709260e6 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 17 18:58:39 ingress-addon-legacy-795879 crio[955]: time="2023-07-17 18:58:39.083324687Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-IKWG2HNHSDJ573Z2 - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-EX7CGZVYFROBR5II - [0:0]\n-X KUBE-HP-EX7CGZVYFROBR5II\n-X KUBE-HP-IKWG2HNHSDJ573Z2\nCOMMIT\n"
	Jul 17 18:58:39 ingress-addon-legacy-795879 crio[955]: time="2023-07-17 18:58:39.084663279Z" level=info msg="Closing host port tcp:80"
	Jul 17 18:58:39 ingress-addon-legacy-795879 crio[955]: time="2023-07-17 18:58:39.084716627Z" level=info msg="Closing host port tcp:443"
	Jul 17 18:58:39 ingress-addon-legacy-795879 crio[955]: time="2023-07-17 18:58:39.085748313Z" level=info msg="Host port tcp:80 does not have an open socket"
	Jul 17 18:58:39 ingress-addon-legacy-795879 crio[955]: time="2023-07-17 18:58:39.085769294Z" level=info msg="Host port tcp:443 does not have an open socket"
	Jul 17 18:58:39 ingress-addon-legacy-795879 crio[955]: time="2023-07-17 18:58:39.085905825Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-nk7r7 Namespace:ingress-nginx ID:a66d8e29c285037476e7b6c0a15c617d0744b8ffa9a70f7c21f9b17f94d6799e UID:87b3b0b6-7f9f-4328-b050-06a3a6f6a17b NetNS:/var/run/netns/4028bbed-310c-46fc-88b4-f1e1b80fd2b2 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jul 17 18:58:39 ingress-addon-legacy-795879 crio[955]: time="2023-07-17 18:58:39.086030315Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-nk7r7 from CNI network \"kindnet\" (type=ptp)"
	Jul 17 18:58:39 ingress-addon-legacy-795879 crio[955]: time="2023-07-17 18:58:39.129419286Z" level=info msg="Stopped pod sandbox: a66d8e29c285037476e7b6c0a15c617d0744b8ffa9a70f7c21f9b17f94d6799e" id=7023f798-f06b-40c1-ba6e-2e0629dc8d5a name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Jul 17 18:58:39 ingress-addon-legacy-795879 crio[955]: time="2023-07-17 18:58:39.129544496Z" level=info msg="Stopped pod sandbox (already stopped): a66d8e29c285037476e7b6c0a15c617d0744b8ffa9a70f7c21f9b17f94d6799e" id=92e0c9d2-ac6a-47bb-8d5f-436e709260e6 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8f4a3d91f2e4e       gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea            23 seconds ago      Running             hello-world-app           0                   0758d124cdbbd       hello-world-app-5f5d8b66bb-n8zrw
	2a6dd615b2f28       docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6                    2 minutes ago       Running             nginx                     0                   564e65e189633       nginx
	5c5e114fb2809       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   3 minutes ago       Exited              controller                0                   a66d8e29c2850       ingress-nginx-controller-7fcf777cb7-nk7r7
	1e36000385a3c       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              patch                     0                   a565a901a195a       ingress-nginx-admission-patch-wn2xb
	097c659481c94       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              create                    0                   7b01c0ef6d3be       ingress-nginx-admission-create-9vgq8
	b7fa0ade34d6b       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   85c4c2f372968       coredns-66bff467f8-x9prc
	a03c459738ada       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   3f88ea38a9dc5       storage-provisioner
	5d624503b059a       docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974                 3 minutes ago       Running             kindnet-cni               0                   0a44384638269       kindnet-qhnfh
	1828ec2df931d       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   4016fdd5e80c5       kube-proxy-klr5w
	176c9a2e0cc17       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   3 minutes ago       Running             etcd                      0                   6d02f4c4a4280       etcd-ingress-addon-legacy-795879
	3a455ae5945af       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   3 minutes ago       Running             kube-controller-manager   0                   55a4501e907cb       kube-controller-manager-ingress-addon-legacy-795879
	6b0fb974bad9b       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   3 minutes ago       Running             kube-apiserver            0                   7875f95643f64       kube-apiserver-ingress-addon-legacy-795879
	b6f1b8c5a3b63       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   3 minutes ago       Running             kube-scheduler            0                   b6fa1ba0bdc49       kube-scheduler-ingress-addon-legacy-795879
	
	* 
	* ==> coredns [b7fa0ade34d6bcd2784ad638c4218afbae19eab5f6c5a212c526fc025bd2f910] <==
	* [INFO] 10.244.0.5:52382 - 29213 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.005233792s
	[INFO] 10.244.0.5:44869 - 47672 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006807357s
	[INFO] 10.244.0.5:49162 - 34648 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006775774s
	[INFO] 10.244.0.5:52382 - 52809 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006583933s
	[INFO] 10.244.0.5:49641 - 24824 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006842302s
	[INFO] 10.244.0.5:44851 - 14221 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006882978s
	[INFO] 10.244.0.5:36138 - 64421 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006978446s
	[INFO] 10.244.0.5:39223 - 31745 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00699393s
	[INFO] 10.244.0.5:53634 - 17114 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007087034s
	[INFO] 10.244.0.5:53634 - 33873 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006372216s
	[INFO] 10.244.0.5:44851 - 61253 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006664935s
	[INFO] 10.244.0.5:49641 - 16936 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006710241s
	[INFO] 10.244.0.5:52382 - 9577 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006800734s
	[INFO] 10.244.0.5:53634 - 26340 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000066633s
	[INFO] 10.244.0.5:39223 - 33825 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006686699s
	[INFO] 10.244.0.5:49162 - 7806 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006972106s
	[INFO] 10.244.0.5:44851 - 11763 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000058933s
	[INFO] 10.244.0.5:39223 - 54962 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000071107s
	[INFO] 10.244.0.5:49641 - 30381 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000286072s
	[INFO] 10.244.0.5:44869 - 10153 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007371962s
	[INFO] 10.244.0.5:52382 - 62107 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000368599s
	[INFO] 10.244.0.5:36138 - 35714 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.007047289s
	[INFO] 10.244.0.5:49162 - 24567 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000272316s
	[INFO] 10.244.0.5:44869 - 44703 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00006291s
	[INFO] 10.244.0.5:36138 - 43420 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000055608s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-795879
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-795879
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=46c8e7c4243e42e98d29628785c0523fbabbd9b5
	                    minikube.k8s.io/name=ingress-addon-legacy-795879
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_17T18_54_59_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 18:54:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-795879
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 18:58:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 18:58:30 +0000   Mon, 17 Jul 2023 18:54:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 18:58:30 +0000   Mon, 17 Jul 2023 18:54:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 18:58:30 +0000   Mon, 17 Jul 2023 18:54:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 18:58:30 +0000   Mon, 17 Jul 2023 18:55:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-795879
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	System Info:
	  Machine ID:                 1e40e6e4fc0248cea141fedcfab07e6a
	  System UUID:                7ad419a9-1169-408a-9841-357ecf2df064
	  Boot ID:                    72066744-0b12-457f-a61f-5086cdf4a210
	  Kernel Version:             5.15.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-n8zrw                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	  kube-system                 coredns-66bff467f8-x9prc                               100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m30s
	  kube-system                 etcd-ingress-addon-legacy-795879                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  kube-system                 kindnet-qhnfh                                          100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m30s
	  kube-system                 kube-apiserver-ingress-addon-legacy-795879             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-795879    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m45s
	  kube-system                 kube-proxy-klr5w                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m30s
	  kube-system                 kube-scheduler-ingress-addon-legacy-795879             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m45s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             120Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From        Message
	  ----    ------                   ----   ----        -------
	  Normal  Starting                 3m45s  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m45s  kubelet     Node ingress-addon-legacy-795879 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m45s  kubelet     Node ingress-addon-legacy-795879 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m45s  kubelet     Node ingress-addon-legacy-795879 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m29s  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m25s  kubelet     Node ingress-addon-legacy-795879 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.005023] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.008130] FS-Cache: N-cookie d=00000000b95e8ad8{9p.inode} n=00000000642f0408
	[  +0.008754] FS-Cache: N-key=[8] '89a30f0200000000'
	[  +0.281824] FS-Cache: Duplicate cookie detected
	[  +0.004719] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006751] FS-Cache: O-cookie d=00000000b95e8ad8{9p.inode} n=0000000025805cad
	[  +0.007382] FS-Cache: O-key=[8] '94a30f0200000000'
	[  +0.004943] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006556] FS-Cache: N-cookie d=00000000b95e8ad8{9p.inode} n=000000004cb50f63
	[  +0.007445] FS-Cache: N-key=[8] '94a30f0200000000'
	[Jul17 18:54] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Jul17 18:56] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: f2 e3 c3 88 e1 8a 72 c9 83 e6 6a 29 08 00
	[  +1.019187] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000028] ll header: 00000000: f2 e3 c3 88 e1 8a 72 c9 83 e6 6a 29 08 00
	[  +2.015826] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: f2 e3 c3 88 e1 8a 72 c9 83 e6 6a 29 08 00
	[  +4.127678] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: f2 e3 c3 88 e1 8a 72 c9 83 e6 6a 29 08 00
	[  +8.191406] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: f2 e3 c3 88 e1 8a 72 c9 83 e6 6a 29 08 00
	[ +16.126801] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: f2 e3 c3 88 e1 8a 72 c9 83 e6 6a 29 08 00
	[Jul17 18:57] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f2 e3 c3 88 e1 8a 72 c9 83 e6 6a 29 08 00
	
	* 
	* ==> etcd [176c9a2e0cc17089c219460be072e7863de4c2670e8ab6d4f01654f48c24cb59] <==
	* 2023-07-17 18:54:52.385375 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/07/17 18:54:52 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-07-17 18:54:52.385966 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2023-07-17 18:54:52.386267 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-07-17 18:54:52.386362 I | embed: listening for peers on 192.168.49.2:2380
	2023-07-17 18:54:52.386471 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/07/17 18:54:53 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/07/17 18:54:53 INFO: aec36adc501070cc became candidate at term 2
	raft2023/07/17 18:54:53 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/07/17 18:54:53 INFO: aec36adc501070cc became leader at term 2
	raft2023/07/17 18:54:53 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-07-17 18:54:53.173923 I | etcdserver: setting up the initial cluster version to 3.4
	2023-07-17 18:54:53.175019 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-07-17 18:54:53.175093 I | etcdserver/api: enabled capabilities for version 3.4
	2023-07-17 18:54:53.175111 I | etcdserver: published {Name:ingress-addon-legacy-795879 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-07-17 18:54:53.175132 I | embed: ready to serve client requests
	2023-07-17 18:54:53.175155 I | embed: ready to serve client requests
	2023-07-17 18:54:53.177512 I | embed: serving client requests on 192.168.49.2:2379
	2023-07-17 18:54:53.177605 I | embed: serving client requests on 127.0.0.1:2379
	2023-07-17 18:55:20.102052 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-66bff467f8-x9prc\" " with result "range_response_count:1 size:3753" took too long (196.137216ms) to execute
	2023-07-17 18:55:20.102253 W | etcdserver: request "header:<ID:8128022495263840434 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/minions/ingress-addon-legacy-795879\" mod_revision:384 > success:<request_put:<key:\"/registry/minions/ingress-addon-legacy-795879\" value_size:6392 >> failure:<request_range:<key:\"/registry/minions/ingress-addon-legacy-795879\" > >>" with result "size:16" took too long (116.629343ms) to execute
	2023-07-17 18:55:20.134669 W | etcdserver: read-only range request "key:\"/registry/minions/ingress-addon-legacy-795879\" " with result "range_response_count:1 size:6459" took too long (164.280802ms) to execute
	2023-07-17 18:55:20.257760 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/coredns-66bff467f8-x9prc.1772bc8ad4e9c66a\" " with result "range_response_count:1 size:829" took too long (151.627045ms) to execute
	2023-07-17 18:55:20.257863 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:8 size:37328" took too long (120.376291ms) to execute
	2023-07-17 18:55:20.257947 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-66bff467f8-x9prc\" " with result "range_response_count:1 size:3753" took too long (151.798393ms) to execute
	
	* 
	* ==> kernel <==
	*  18:58:44 up  3:41,  0 users,  load average: 0.14, 0.77, 1.74
	Linux ingress-addon-legacy-795879 5.15.0-1037-gcp #45~20.04.1-Ubuntu SMP Thu Jun 22 08:31:09 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [5d624503b059a86b81a763a73c903c69b8263182aba05a24529b0612ae0055da] <==
	* I0717 18:56:37.426931       1 main.go:227] handling current node
	I0717 18:56:47.431678       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 18:56:47.431707       1 main.go:227] handling current node
	I0717 18:56:57.443009       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 18:56:57.443036       1 main.go:227] handling current node
	I0717 18:57:07.446278       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 18:57:07.446307       1 main.go:227] handling current node
	I0717 18:57:17.455245       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 18:57:17.455271       1 main.go:227] handling current node
	I0717 18:57:27.459080       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 18:57:27.459106       1 main.go:227] handling current node
	I0717 18:57:37.470995       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 18:57:37.471026       1 main.go:227] handling current node
	I0717 18:57:47.475106       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 18:57:47.475132       1 main.go:227] handling current node
	I0717 18:57:57.487452       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 18:57:57.487480       1 main.go:227] handling current node
	I0717 18:58:07.498885       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 18:58:07.498914       1 main.go:227] handling current node
	I0717 18:58:17.502828       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 18:58:17.502850       1 main.go:227] handling current node
	I0717 18:58:27.509556       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 18:58:27.509583       1 main.go:227] handling current node
	I0717 18:58:37.513461       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0717 18:58:37.513484       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [6b0fb974bad9bb8421275be4318713a85e414002f68d4270c75a98e8b737da16] <==
	* I0717 18:54:56.273807       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	E0717 18:54:56.274792       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I0717 18:54:56.373234       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0717 18:54:56.373299       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0717 18:54:56.373316       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0717 18:54:56.373529       1 cache.go:39] Caches are synced for autoregister controller
	I0717 18:54:56.376226       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0717 18:54:57.272255       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0717 18:54:57.272395       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0717 18:54:57.277217       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0717 18:54:57.280301       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0717 18:54:57.280323       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0717 18:54:57.565003       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 18:54:57.600706       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0717 18:54:57.702429       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0717 18:54:57.703246       1 controller.go:609] quota admission added evaluator for: endpoints
	I0717 18:54:57.706344       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0717 18:54:58.087598       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 18:54:58.622108       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0717 18:54:59.315059       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0717 18:54:59.475135       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0717 18:55:14.376251       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0717 18:55:14.666633       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0717 18:55:33.717494       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0717 18:55:59.546939       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [3a455ae5945afbaaa12930a8e7ad2f30bf3f283e9e6d3a4d68b6abaad2665646] <==
	* I0717 18:55:14.579351       1 shared_informer.go:230] Caches are synced for expand 
	I0717 18:55:14.581131       1 shared_informer.go:230] Caches are synced for resource quota 
	I0717 18:55:14.629045       1 shared_informer.go:230] Caches are synced for taint 
	I0717 18:55:14.629156       1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone: 
	I0717 18:55:14.629180       1 taint_manager.go:187] Starting NoExecuteTaintManager
	W0717 18:55:14.629228       1 node_lifecycle_controller.go:1048] Missing timestamp for Node ingress-addon-legacy-795879. Assuming now as a timestamp.
	I0717 18:55:14.629265       1 node_lifecycle_controller.go:1199] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0717 18:55:14.629324       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-795879", UID:"165b43bc-4769-4e20-b30f-f6115b6cd01e", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-795879 event: Registered Node ingress-addon-legacy-795879 in Controller
	I0717 18:55:14.661396       1 shared_informer.go:230] Caches are synced for deployment 
	I0717 18:55:14.661775       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0717 18:55:14.661851       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0717 18:55:14.669436       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"9145665b-a8b9-446c-827c-bee30d3ff56f", APIVersion:"apps/v1", ResourceVersion:"323", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 1
	I0717 18:55:14.676849       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"ae7a6d59-32f8-4024-9feb-fda1b7e9d18c", APIVersion:"apps/v1", ResourceVersion:"342", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-x9prc
	I0717 18:55:14.679574       1 shared_informer.go:230] Caches are synced for resource quota 
	I0717 18:55:14.680399       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0717 18:55:24.679265       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0717 18:55:24.679507       1 event.go:278] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"storage-provisioner", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TaintManagerEviction' Cancelling deletion of Pod kube-system/storage-provisioner
	I0717 18:55:33.709104       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"bdf7af8f-06a6-477c-a6ed-9ebf60e6652a", APIVersion:"apps/v1", ResourceVersion:"454", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0717 18:55:33.715712       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"6d1fb519-53a1-48b3-b6e6-351819176eb4", APIVersion:"apps/v1", ResourceVersion:"455", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-nk7r7
	I0717 18:55:33.765221       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"d29c0023-3ee9-4a54-8e73-23928c78fb03", APIVersion:"batch/v1", ResourceVersion:"460", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-9vgq8
	I0717 18:55:33.782740       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"a563dd3d-4b2a-4522-bfde-ac763b1a023f", APIVersion:"batch/v1", ResourceVersion:"471", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-wn2xb
	I0717 18:55:35.808793       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"a563dd3d-4b2a-4522-bfde-ac763b1a023f", APIVersion:"batch/v1", ResourceVersion:"480", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0717 18:55:35.816328       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"d29c0023-3ee9-4a54-8e73-23928c78fb03", APIVersion:"batch/v1", ResourceVersion:"469", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0717 18:58:19.444011       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"580c1d2d-4322-44c6-bcf1-5d4785e480b6", APIVersion:"apps/v1", ResourceVersion:"691", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0717 18:58:19.449696       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"797cb91b-2a2e-4f47-918a-2d8cd08b9818", APIVersion:"apps/v1", ResourceVersion:"692", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-n8zrw
	
	* 
	* ==> kube-proxy [1828ec2df931de52596f7484b05dbcecb6fa3d3bc707a45be74c2f9c387d2b97] <==
	* W0717 18:55:15.265423       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0717 18:55:15.272055       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0717 18:55:15.272081       1 server_others.go:186] Using iptables Proxier.
	I0717 18:55:15.272370       1 server.go:583] Version: v1.18.20
	I0717 18:55:15.272954       1 config.go:133] Starting endpoints config controller
	I0717 18:55:15.272977       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0717 18:55:15.272977       1 config.go:315] Starting service config controller
	I0717 18:55:15.272994       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0717 18:55:15.373145       1 shared_informer.go:230] Caches are synced for endpoints config 
	I0717 18:55:15.373171       1 shared_informer.go:230] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [b6f1b8c5a3b63f4778ea2ace7d6bfbf36472320fe6a5a0079ac58e5472f9c114] <==
	* I0717 18:54:52.988756       1 serving.go:313] Generated self-signed cert in-memory
	W0717 18:54:56.285778       1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0717 18:54:56.285839       1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 18:54:56.285851       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0717 18:54:56.285859       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0717 18:54:56.369797       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0717 18:54:56.369826       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0717 18:54:56.371935       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 18:54:56.372044       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 18:54:56.372908       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0717 18:54:56.372999       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0717 18:54:56.375149       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 18:54:56.461917       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 18:54:56.462195       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 18:54:56.462195       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 18:54:56.476352       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 18:54:56.476352       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0717 18:54:56.476542       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 18:54:56.476443       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0717 18:54:56.476660       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 18:54:56.476764       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 18:54:56.476765       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 18:54:56.476890       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 18:54:57.427661       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0717 18:54:57.772294       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* Jul 17 18:58:01 ingress-addon-legacy-795879 kubelet[1838]: E0717 18:58:01.678343    1838 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jul 17 18:58:01 ingress-addon-legacy-795879 kubelet[1838]: E0717 18:58:01.678382    1838 pod_workers.go:191] Error syncing pod 288ee7e0-e4d5-4197-a3b8-ca3d1aba2c3f ("kube-ingress-dns-minikube_kube-system(288ee7e0-e4d5-4197-a3b8-ca3d1aba2c3f)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Jul 17 18:58:15 ingress-addon-legacy-795879 kubelet[1838]: E0717 18:58:15.678450    1838 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jul 17 18:58:15 ingress-addon-legacy-795879 kubelet[1838]: E0717 18:58:15.678496    1838 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jul 17 18:58:15 ingress-addon-legacy-795879 kubelet[1838]: E0717 18:58:15.678553    1838 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jul 17 18:58:15 ingress-addon-legacy-795879 kubelet[1838]: E0717 18:58:15.678595    1838 pod_workers.go:191] Error syncing pod 288ee7e0-e4d5-4197-a3b8-ca3d1aba2c3f ("kube-ingress-dns-minikube_kube-system(288ee7e0-e4d5-4197-a3b8-ca3d1aba2c3f)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Jul 17 18:58:19 ingress-addon-legacy-795879 kubelet[1838]: I0717 18:58:19.452933    1838 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Jul 17 18:58:19 ingress-addon-legacy-795879 kubelet[1838]: I0717 18:58:19.603117    1838 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-9dhx5" (UniqueName: "kubernetes.io/secret/1969a06d-2fbe-4932-bdd4-259b8eefa0d0-default-token-9dhx5") pod "hello-world-app-5f5d8b66bb-n8zrw" (UID: "1969a06d-2fbe-4932-bdd4-259b8eefa0d0")
	Jul 17 18:58:19 ingress-addon-legacy-795879 kubelet[1838]: W0717 18:58:19.822036    1838 manager.go:1131] Failed to process watch event {EventType:0 Name:/docker/85331f8d3f3b82d198bd6551f0a039b3e945643ef9fa0dcaf8d817aaee089895/crio-0758d124cdbbde6a44dab6bdc0c249d4ec95b518d873333154b46e77afe528ec WatchSource:0}: Error finding container 0758d124cdbbde6a44dab6bdc0c249d4ec95b518d873333154b46e77afe528ec: Status 404 returned error &{%!s(*http.body=&{0xc0017a2ec0 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x750800) %!s(func() error=0x750790)}
	Jul 17 18:58:29 ingress-addon-legacy-795879 kubelet[1838]: E0717 18:58:29.678519    1838 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jul 17 18:58:29 ingress-addon-legacy-795879 kubelet[1838]: E0717 18:58:29.678564    1838 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jul 17 18:58:29 ingress-addon-legacy-795879 kubelet[1838]: E0717 18:58:29.678622    1838 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Jul 17 18:58:29 ingress-addon-legacy-795879 kubelet[1838]: E0717 18:58:29.678656    1838 pod_workers.go:191] Error syncing pod 288ee7e0-e4d5-4197-a3b8-ca3d1aba2c3f ("kube-ingress-dns-minikube_kube-system(288ee7e0-e4d5-4197-a3b8-ca3d1aba2c3f)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Jul 17 18:58:35 ingress-addon-legacy-795879 kubelet[1838]: I0717 18:58:35.243370    1838 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-bg2qb" (UniqueName: "kubernetes.io/secret/288ee7e0-e4d5-4197-a3b8-ca3d1aba2c3f-minikube-ingress-dns-token-bg2qb") pod "288ee7e0-e4d5-4197-a3b8-ca3d1aba2c3f" (UID: "288ee7e0-e4d5-4197-a3b8-ca3d1aba2c3f")
	Jul 17 18:58:35 ingress-addon-legacy-795879 kubelet[1838]: I0717 18:58:35.245445    1838 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/288ee7e0-e4d5-4197-a3b8-ca3d1aba2c3f-minikube-ingress-dns-token-bg2qb" (OuterVolumeSpecName: "minikube-ingress-dns-token-bg2qb") pod "288ee7e0-e4d5-4197-a3b8-ca3d1aba2c3f" (UID: "288ee7e0-e4d5-4197-a3b8-ca3d1aba2c3f"). InnerVolumeSpecName "minikube-ingress-dns-token-bg2qb". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 17 18:58:35 ingress-addon-legacy-795879 kubelet[1838]: I0717 18:58:35.343743    1838 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-bg2qb" (UniqueName: "kubernetes.io/secret/288ee7e0-e4d5-4197-a3b8-ca3d1aba2c3f-minikube-ingress-dns-token-bg2qb") on node "ingress-addon-legacy-795879" DevicePath ""
	Jul 17 18:58:36 ingress-addon-legacy-795879 kubelet[1838]: E0717 18:58:36.909671    1838 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-nk7r7.1772bcb9ea97ffa2", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-nk7r7", UID:"87b3b0b6-7f9f-4328-b050-06a3a6f6a17b", APIVersion:"v1", ResourceVersion:"461", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-795879"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc12581b7362167a2, ext:217625697789, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc12581b7362167a2, ext:217625697789, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-nk7r7.1772bcb9ea97ffa2" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jul 17 18:58:36 ingress-addon-legacy-795879 kubelet[1838]: E0717 18:58:36.913933    1838 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-nk7r7.1772bcb9ea97ffa2", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-nk7r7", UID:"87b3b0b6-7f9f-4328-b050-06a3a6f6a17b", APIVersion:"v1", ResourceVersion:"461", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-795879"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc12581b7362167a2, ext:217625697789, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc12581b73648e24b, ext:217628285087, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-nk7r7.1772bcb9ea97ffa2" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jul 17 18:58:39 ingress-addon-legacy-795879 kubelet[1838]: W0717 18:58:39.154049    1838 pod_container_deletor.go:77] Container "a66d8e29c285037476e7b6c0a15c617d0744b8ffa9a70f7c21f9b17f94d6799e" not found in pod's containers
	Jul 17 18:58:39 ingress-addon-legacy-795879 kubelet[1838]: I0717 18:58:39.267831    1838 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-sftht" (UniqueName: "kubernetes.io/secret/87b3b0b6-7f9f-4328-b050-06a3a6f6a17b-ingress-nginx-token-sftht") pod "87b3b0b6-7f9f-4328-b050-06a3a6f6a17b" (UID: "87b3b0b6-7f9f-4328-b050-06a3a6f6a17b")
	Jul 17 18:58:39 ingress-addon-legacy-795879 kubelet[1838]: I0717 18:58:39.267907    1838 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/87b3b0b6-7f9f-4328-b050-06a3a6f6a17b-webhook-cert") pod "87b3b0b6-7f9f-4328-b050-06a3a6f6a17b" (UID: "87b3b0b6-7f9f-4328-b050-06a3a6f6a17b")
	Jul 17 18:58:39 ingress-addon-legacy-795879 kubelet[1838]: I0717 18:58:39.270058    1838 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87b3b0b6-7f9f-4328-b050-06a3a6f6a17b-ingress-nginx-token-sftht" (OuterVolumeSpecName: "ingress-nginx-token-sftht") pod "87b3b0b6-7f9f-4328-b050-06a3a6f6a17b" (UID: "87b3b0b6-7f9f-4328-b050-06a3a6f6a17b"). InnerVolumeSpecName "ingress-nginx-token-sftht". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 17 18:58:39 ingress-addon-legacy-795879 kubelet[1838]: I0717 18:58:39.270162    1838 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87b3b0b6-7f9f-4328-b050-06a3a6f6a17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "87b3b0b6-7f9f-4328-b050-06a3a6f6a17b" (UID: "87b3b0b6-7f9f-4328-b050-06a3a6f6a17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 17 18:58:39 ingress-addon-legacy-795879 kubelet[1838]: I0717 18:58:39.368256    1838 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/87b3b0b6-7f9f-4328-b050-06a3a6f6a17b-webhook-cert") on node "ingress-addon-legacy-795879" DevicePath ""
	Jul 17 18:58:39 ingress-addon-legacy-795879 kubelet[1838]: I0717 18:58:39.368299    1838 reconciler.go:319] Volume detached for volume "ingress-nginx-token-sftht" (UniqueName: "kubernetes.io/secret/87b3b0b6-7f9f-4328-b050-06a3a6f6a17b-ingress-nginx-token-sftht") on node "ingress-addon-legacy-795879" DevicePath ""
	
	* 
	* ==> storage-provisioner [a03c459738ada9ddfbef126b717b7fb85eb3c6bcff10510bac5c71f5973788f0] <==
	* I0717 18:55:21.268960       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 18:55:21.278071       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 18:55:21.278115       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0717 18:55:21.407243       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0717 18:55:21.407398       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-795879_705dfe80-85c6-41e3-8d76-6e883818a407!
	I0717 18:55:21.407396       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"49ca9421-20b6-4a98-a5d9-a48d1e49f128", APIVersion:"v1", ResourceVersion:"393", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-795879_705dfe80-85c6-41e3-8d76-6e883818a407 became leader
	I0717 18:55:21.508355       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-795879_705dfe80-85c6-41e3-8d76-6e883818a407!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-795879 -n ingress-addon-legacy-795879
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-795879 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (182.39s)
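Note on the failure above: the kubelet log shows the ingress-dns pod never started because CRI-O refuses the short image name "cryptexlabs/minikube-ingress-dns:0.3.0" when /etc/containers/registries.conf on the node defines no unqualified-search registries. A minimal way to confirm this from the host, assuming the ingress-addon-legacy-795879 profile is still running and the image is published on Docker Hub (illustrative shell commands, not part of the test suite):

	# check whether the node's registries.conf declares any unqualified-search registries
	out/minikube-linux-amd64 -p ingress-addon-legacy-795879 ssh "grep -n unqualified-search-registries /etc/containers/registries.conf"
	# pulling by a fully qualified reference sidesteps short-name resolution entirely
	out/minikube-linux-amd64 -p ingress-addon-legacy-795879 ssh "sudo crictl pull docker.io/cryptexlabs/minikube-ingress-dns@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"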

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (3.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-549411 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-549411 -- exec busybox-67b7f59bb-8mh6q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-549411 -- exec busybox-67b7f59bb-8mh6q -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-549411 -- exec busybox-67b7f59bb-8mh6q -- sh -c "ping -c 1 192.168.58.1": exit status 1 (173.645753ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-67b7f59bb-8mh6q): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-549411 -- exec busybox-67b7f59bb-rww5s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-549411 -- exec busybox-67b7f59bb-rww5s -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-549411 -- exec busybox-67b7f59bb-rww5s -- sh -c "ping -c 1 192.168.58.1": exit status 1 (167.932043ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-67b7f59bb-rww5s): exit status 1
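Note on the failures above: in both pods the busybox ping exits with "permission denied (are you root?)", which is what the applet prints when it cannot open an ICMP socket without elevated privileges, typically because the container user is not root and lacks CAP_NET_RAW. Two illustrative checks against the same pods, assuming they are still running (not part of the test suite):

	# effective capability mask of the pod's user; CAP_NET_RAW is bit 13 (mask 0x2000)
	out/minikube-linux-amd64 kubectl -p multinode-549411 -- exec busybox-67b7f59bb-8mh6q -- sh -c "grep CapEff /proc/self/status"
	# group range permitted to open unprivileged ICMP datagram sockets (a related kernel knob;
	# whether this busybox build's ping falls back to it is an assumption worth verifying)
	out/minikube-linux-amd64 kubectl -p multinode-549411 -- exec busybox-67b7f59bb-8mh6q -- sh -c "cat /proc/sys/net/ipv4/ping_group_range"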
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-549411
helpers_test.go:235: (dbg) docker inspect multinode-549411:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "45cef728eef070a7d16b710f7f2faee4f9d97e87c3d1ccb69b7e1c7b3c92a882",
	        "Created": "2023-07-17T19:03:43.158949563Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 228996,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-17T19:03:43.463318122Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6cc01e6091959400f260dc442708e7c71630b58dab1f7c344cb00926bd84950",
	        "ResolvConfPath": "/var/lib/docker/containers/45cef728eef070a7d16b710f7f2faee4f9d97e87c3d1ccb69b7e1c7b3c92a882/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/45cef728eef070a7d16b710f7f2faee4f9d97e87c3d1ccb69b7e1c7b3c92a882/hostname",
	        "HostsPath": "/var/lib/docker/containers/45cef728eef070a7d16b710f7f2faee4f9d97e87c3d1ccb69b7e1c7b3c92a882/hosts",
	        "LogPath": "/var/lib/docker/containers/45cef728eef070a7d16b710f7f2faee4f9d97e87c3d1ccb69b7e1c7b3c92a882/45cef728eef070a7d16b710f7f2faee4f9d97e87c3d1ccb69b7e1c7b3c92a882-json.log",
	        "Name": "/multinode-549411",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-549411:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-549411",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c3d07914b13951bf136dcc09b619906f13c7515fb13a72f41c6dc36660d6ea91-init/diff:/var/lib/docker/overlay2/d8b40fcaabfbbb6eb20cfe7c35f752b4babaa96b29803507d5f63d9939e9e0f0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c3d07914b13951bf136dcc09b619906f13c7515fb13a72f41c6dc36660d6ea91/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c3d07914b13951bf136dcc09b619906f13c7515fb13a72f41c6dc36660d6ea91/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c3d07914b13951bf136dcc09b619906f13c7515fb13a72f41c6dc36660d6ea91/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-549411",
	                "Source": "/var/lib/docker/volumes/multinode-549411/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-549411",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-549411",
	                "name.minikube.sigs.k8s.io": "multinode-549411",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "120ebbeef3d3e52a57a1138cf818e1c811bdb752c41b722fa7e4593a4729c59a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32847"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32846"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32843"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32845"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32844"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/120ebbeef3d3",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-549411": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "45cef728eef0",
	                        "multinode-549411"
	                    ],
	                    "NetworkID": "743d16d8288979c712ba0b691c7230f41ad17b6ab9b7dc3f278a028c9f815626",
	                    "EndpointID": "d603e336e42ec42641776bf396be97f1620940764b445ae1787aff75a8dc20e7",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
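The port mappings in the inspect output above are how the test harness reaches this node over SSH on the host loopback (22/tcp is published on 127.0.0.1:32847 for this run). As a minimal sketch for reproducing that lookup locally, and not part of the captured run, the mapped port can be read back with the same Go template the minikube logs further down show being executed:

    # hypothetical manual re-run of the lookup the harness performs;
    # for this report the 22/tcp mapping resolves to 32847
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' multinode-549411
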
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-549411 -n multinode-549411
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549411 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-549411 logs -n 25: (1.311793127s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-848583                           | mount-start-2-848583 | jenkins | v1.30.1 | 17 Jul 23 19:03 UTC | 17 Jul 23 19:03 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-848583 ssh -- ls                    | mount-start-2-848583 | jenkins | v1.30.1 | 17 Jul 23 19:03 UTC | 17 Jul 23 19:03 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-830426                           | mount-start-1-830426 | jenkins | v1.30.1 | 17 Jul 23 19:03 UTC | 17 Jul 23 19:03 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-848583 ssh -- ls                    | mount-start-2-848583 | jenkins | v1.30.1 | 17 Jul 23 19:03 UTC | 17 Jul 23 19:03 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-848583                           | mount-start-2-848583 | jenkins | v1.30.1 | 17 Jul 23 19:03 UTC | 17 Jul 23 19:03 UTC |
	| start   | -p mount-start-2-848583                           | mount-start-2-848583 | jenkins | v1.30.1 | 17 Jul 23 19:03 UTC | 17 Jul 23 19:03 UTC |
	| ssh     | mount-start-2-848583 ssh -- ls                    | mount-start-2-848583 | jenkins | v1.30.1 | 17 Jul 23 19:03 UTC | 17 Jul 23 19:03 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-848583                           | mount-start-2-848583 | jenkins | v1.30.1 | 17 Jul 23 19:03 UTC | 17 Jul 23 19:03 UTC |
	| delete  | -p mount-start-1-830426                           | mount-start-1-830426 | jenkins | v1.30.1 | 17 Jul 23 19:03 UTC | 17 Jul 23 19:03 UTC |
	| start   | -p multinode-549411                               | multinode-549411     | jenkins | v1.30.1 | 17 Jul 23 19:03 UTC | 17 Jul 23 19:05 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-549411 -- apply -f                   | multinode-549411     | jenkins | v1.30.1 | 17 Jul 23 19:05 UTC | 17 Jul 23 19:05 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-549411 -- rollout                    | multinode-549411     | jenkins | v1.30.1 | 17 Jul 23 19:05 UTC | 17 Jul 23 19:05 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-549411 -- get pods -o                | multinode-549411     | jenkins | v1.30.1 | 17 Jul 23 19:05 UTC | 17 Jul 23 19:05 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-549411 -- get pods -o                | multinode-549411     | jenkins | v1.30.1 | 17 Jul 23 19:05 UTC | 17 Jul 23 19:05 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-549411 -- exec                       | multinode-549411     | jenkins | v1.30.1 | 17 Jul 23 19:05 UTC | 17 Jul 23 19:05 UTC |
	|         | busybox-67b7f59bb-8mh6q --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-549411 -- exec                       | multinode-549411     | jenkins | v1.30.1 | 17 Jul 23 19:05 UTC | 17 Jul 23 19:05 UTC |
	|         | busybox-67b7f59bb-rww5s --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-549411 -- exec                       | multinode-549411     | jenkins | v1.30.1 | 17 Jul 23 19:05 UTC | 17 Jul 23 19:05 UTC |
	|         | busybox-67b7f59bb-8mh6q --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-549411 -- exec                       | multinode-549411     | jenkins | v1.30.1 | 17 Jul 23 19:05 UTC | 17 Jul 23 19:05 UTC |
	|         | busybox-67b7f59bb-rww5s --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-549411 -- exec                       | multinode-549411     | jenkins | v1.30.1 | 17 Jul 23 19:05 UTC | 17 Jul 23 19:05 UTC |
	|         | busybox-67b7f59bb-8mh6q -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-549411 -- exec                       | multinode-549411     | jenkins | v1.30.1 | 17 Jul 23 19:05 UTC | 17 Jul 23 19:05 UTC |
	|         | busybox-67b7f59bb-rww5s -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-549411 -- get pods -o                | multinode-549411     | jenkins | v1.30.1 | 17 Jul 23 19:05 UTC | 17 Jul 23 19:05 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-549411 -- exec                       | multinode-549411     | jenkins | v1.30.1 | 17 Jul 23 19:05 UTC | 17 Jul 23 19:05 UTC |
	|         | busybox-67b7f59bb-8mh6q                           |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-549411 -- exec                       | multinode-549411     | jenkins | v1.30.1 | 17 Jul 23 19:05 UTC |                     |
	|         | busybox-67b7f59bb-8mh6q -- sh                     |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-549411 -- exec                       | multinode-549411     | jenkins | v1.30.1 | 17 Jul 23 19:05 UTC | 17 Jul 23 19:05 UTC |
	|         | busybox-67b7f59bb-rww5s                           |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-549411 -- exec                       | multinode-549411     | jenkins | v1.30.1 | 17 Jul 23 19:05 UTC |                     |
	|         | busybox-67b7f59bb-rww5s -- sh                     |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 19:03:37
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 19:03:37.389055  228393 out.go:296] Setting OutFile to fd 1 ...
	I0717 19:03:37.389170  228393 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 19:03:37.389183  228393 out.go:309] Setting ErrFile to fd 2...
	I0717 19:03:37.389188  228393 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 19:03:37.389386  228393 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-138069/.minikube/bin
	I0717 19:03:37.389989  228393 out.go:303] Setting JSON to false
	I0717 19:03:37.391192  228393 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":13568,"bootTime":1689607049,"procs":581,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 19:03:37.391255  228393 start.go:138] virtualization: kvm guest
	I0717 19:03:37.394148  228393 out.go:177] * [multinode-549411] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 19:03:37.395923  228393 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 19:03:37.395918  228393 notify.go:220] Checking for updates...
	I0717 19:03:37.397628  228393 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 19:03:37.399428  228393 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16890-138069/kubeconfig
	I0717 19:03:37.400975  228393 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-138069/.minikube
	I0717 19:03:37.402978  228393 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 19:03:37.404765  228393 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 19:03:37.406717  228393 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 19:03:37.427951  228393 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 19:03:37.428068  228393 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 19:03:37.481472  228393 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2023-07-17 19:03:37.471899222 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 19:03:37.481589  228393 docker.go:294] overlay module found
	I0717 19:03:37.483889  228393 out.go:177] * Using the docker driver based on user configuration
	I0717 19:03:37.485486  228393 start.go:298] selected driver: docker
	I0717 19:03:37.485505  228393 start.go:880] validating driver "docker" against <nil>
	I0717 19:03:37.485516  228393 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 19:03:37.486248  228393 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 19:03:37.537677  228393 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2023-07-17 19:03:37.529376121 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 19:03:37.537898  228393 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0717 19:03:37.538109  228393 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 19:03:37.540356  228393 out.go:177] * Using Docker driver with root privileges
	I0717 19:03:37.541848  228393 cni.go:84] Creating CNI manager for ""
	I0717 19:03:37.541865  228393 cni.go:137] 0 nodes found, recommending kindnet
	I0717 19:03:37.541873  228393 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 19:03:37.541882  228393 start_flags.go:319] config:
	{Name:multinode-549411 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-549411 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlu
gin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:03:37.543584  228393 out.go:177] * Starting control plane node multinode-549411 in cluster multinode-549411
	I0717 19:03:37.545263  228393 cache.go:122] Beginning downloading kic base image for docker with crio
	I0717 19:03:37.547056  228393 out.go:177] * Pulling base image ...
	I0717 19:03:37.548657  228393 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 19:03:37.548703  228393 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16890-138069/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0717 19:03:37.548714  228393 cache.go:57] Caching tarball of preloaded images
	I0717 19:03:37.548754  228393 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 19:03:37.548798  228393 preload.go:174] Found /home/jenkins/minikube-integration/16890-138069/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 19:03:37.548807  228393 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0717 19:03:37.549104  228393 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/multinode-549411/config.json ...
	I0717 19:03:37.549125  228393 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/multinode-549411/config.json: {Name:mk823fde19bcb21916ad624daccbc0d6efd40785 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:03:37.564588  228393 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0717 19:03:37.564612  228393 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0717 19:03:37.564630  228393 cache.go:195] Successfully downloaded all kic artifacts
	I0717 19:03:37.564667  228393 start.go:365] acquiring machines lock for multinode-549411: {Name:mk2cb3935a4cd2c96f9b74854c4cb7909e4f8f1d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:03:37.564768  228393 start.go:369] acquired machines lock for "multinode-549411" in 80.701µs
	I0717 19:03:37.564796  228393 start.go:93] Provisioning new machine with config: &{Name:multinode-549411 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-549411 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 19:03:37.564886  228393 start.go:125] createHost starting for "" (driver="docker")
	I0717 19:03:37.567814  228393 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0717 19:03:37.568063  228393 start.go:159] libmachine.API.Create for "multinode-549411" (driver="docker")
	I0717 19:03:37.568105  228393 client.go:168] LocalClient.Create starting
	I0717 19:03:37.568165  228393 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem
	I0717 19:03:37.568197  228393 main.go:141] libmachine: Decoding PEM data...
	I0717 19:03:37.568213  228393 main.go:141] libmachine: Parsing certificate...
	I0717 19:03:37.568266  228393 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16890-138069/.minikube/certs/cert.pem
	I0717 19:03:37.568287  228393 main.go:141] libmachine: Decoding PEM data...
	I0717 19:03:37.568296  228393 main.go:141] libmachine: Parsing certificate...
	I0717 19:03:37.568572  228393 cli_runner.go:164] Run: docker network inspect multinode-549411 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0717 19:03:37.584535  228393 cli_runner.go:211] docker network inspect multinode-549411 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0717 19:03:37.584607  228393 network_create.go:281] running [docker network inspect multinode-549411] to gather additional debugging logs...
	I0717 19:03:37.584625  228393 cli_runner.go:164] Run: docker network inspect multinode-549411
	W0717 19:03:37.600288  228393 cli_runner.go:211] docker network inspect multinode-549411 returned with exit code 1
	I0717 19:03:37.600337  228393 network_create.go:284] error running [docker network inspect multinode-549411]: docker network inspect multinode-549411: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-549411 not found
	I0717 19:03:37.600351  228393 network_create.go:286] output of [docker network inspect multinode-549411]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-549411 not found
	
	** /stderr **
	I0717 19:03:37.600415  228393 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 19:03:37.617021  228393 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1070ebc8dfdf IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:9e:80:fb:8c} reservation:<nil>}
	I0717 19:03:37.617676  228393 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00128f0e0}
	I0717 19:03:37.617710  228393 network_create.go:123] attempt to create docker network multinode-549411 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0717 19:03:37.617760  228393 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-549411 multinode-549411
	I0717 19:03:37.669695  228393 network_create.go:107] docker network multinode-549411 192.168.58.0/24 created
	I0717 19:03:37.669733  228393 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-549411" container
	I0717 19:03:37.669804  228393 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0717 19:03:37.685378  228393 cli_runner.go:164] Run: docker volume create multinode-549411 --label name.minikube.sigs.k8s.io=multinode-549411 --label created_by.minikube.sigs.k8s.io=true
	I0717 19:03:37.703319  228393 oci.go:103] Successfully created a docker volume multinode-549411
	I0717 19:03:37.703407  228393 cli_runner.go:164] Run: docker run --rm --name multinode-549411-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-549411 --entrypoint /usr/bin/test -v multinode-549411:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0717 19:03:38.227273  228393 oci.go:107] Successfully prepared a docker volume multinode-549411
	I0717 19:03:38.227306  228393 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 19:03:38.227333  228393 kic.go:190] Starting extracting preloaded images to volume ...
	I0717 19:03:38.227405  228393 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16890-138069/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-549411:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0717 19:03:43.090675  228393 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16890-138069/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-549411:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (4.863218589s)
	I0717 19:03:43.090716  228393 kic.go:199] duration metric: took 4.863377 seconds to extract preloaded images to volume
	W0717 19:03:43.090870  228393 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0717 19:03:43.090982  228393 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0717 19:03:43.143319  228393 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-549411 --name multinode-549411 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-549411 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-549411 --network multinode-549411 --ip 192.168.58.2 --volume multinode-549411:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0717 19:03:43.471907  228393 cli_runner.go:164] Run: docker container inspect multinode-549411 --format={{.State.Running}}
	I0717 19:03:43.490782  228393 cli_runner.go:164] Run: docker container inspect multinode-549411 --format={{.State.Status}}
	I0717 19:03:43.510402  228393 cli_runner.go:164] Run: docker exec multinode-549411 stat /var/lib/dpkg/alternatives/iptables
	I0717 19:03:43.568160  228393 oci.go:144] the created container "multinode-549411" has a running status.
	I0717 19:03:43.568191  228393 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16890-138069/.minikube/machines/multinode-549411/id_rsa...
	I0717 19:03:43.634066  228393 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/machines/multinode-549411/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0717 19:03:43.634116  228393 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16890-138069/.minikube/machines/multinode-549411/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0717 19:03:43.654420  228393 cli_runner.go:164] Run: docker container inspect multinode-549411 --format={{.State.Status}}
	I0717 19:03:43.672116  228393 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0717 19:03:43.672137  228393 kic_runner.go:114] Args: [docker exec --privileged multinode-549411 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0717 19:03:43.733207  228393 cli_runner.go:164] Run: docker container inspect multinode-549411 --format={{.State.Status}}
	I0717 19:03:43.749812  228393 machine.go:88] provisioning docker machine ...
	I0717 19:03:43.749850  228393 ubuntu.go:169] provisioning hostname "multinode-549411"
	I0717 19:03:43.749928  228393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-549411
	I0717 19:03:43.766806  228393 main.go:141] libmachine: Using SSH client type: native
	I0717 19:03:43.767408  228393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I0717 19:03:43.767436  228393 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-549411 && echo "multinode-549411" | sudo tee /etc/hostname
	I0717 19:03:43.768116  228393 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40120->127.0.0.1:32847: read: connection reset by peer
	I0717 19:03:46.902634  228393 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-549411
	
	I0717 19:03:46.902721  228393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-549411
	I0717 19:03:46.918800  228393 main.go:141] libmachine: Using SSH client type: native
	I0717 19:03:46.919219  228393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I0717 19:03:46.919238  228393 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-549411' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-549411/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-549411' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:03:47.048164  228393 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:03:47.048193  228393 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16890-138069/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-138069/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-138069/.minikube}
	I0717 19:03:47.048223  228393 ubuntu.go:177] setting up certificates
	I0717 19:03:47.048233  228393 provision.go:83] configureAuth start
	I0717 19:03:47.048284  228393 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-549411
	I0717 19:03:47.064593  228393 provision.go:138] copyHostCerts
	I0717 19:03:47.064640  228393 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16890-138069/.minikube/ca.pem
	I0717 19:03:47.064684  228393 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-138069/.minikube/ca.pem, removing ...
	I0717 19:03:47.064694  228393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-138069/.minikube/ca.pem
	I0717 19:03:47.064766  228393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-138069/.minikube/ca.pem (1078 bytes)
	I0717 19:03:47.064839  228393 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16890-138069/.minikube/cert.pem
	I0717 19:03:47.064856  228393 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-138069/.minikube/cert.pem, removing ...
	I0717 19:03:47.064862  228393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-138069/.minikube/cert.pem
	I0717 19:03:47.064887  228393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-138069/.minikube/cert.pem (1123 bytes)
	I0717 19:03:47.064929  228393 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16890-138069/.minikube/key.pem
	I0717 19:03:47.064944  228393 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-138069/.minikube/key.pem, removing ...
	I0717 19:03:47.064951  228393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-138069/.minikube/key.pem
	I0717 19:03:47.064970  228393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-138069/.minikube/key.pem (1675 bytes)
	I0717 19:03:47.065019  228393 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-138069/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca-key.pem org=jenkins.multinode-549411 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-549411]
	I0717 19:03:47.183145  228393 provision.go:172] copyRemoteCerts
	I0717 19:03:47.183209  228393 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:03:47.183257  228393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-549411
	I0717 19:03:47.200564  228393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/multinode-549411/id_rsa Username:docker}
	I0717 19:03:47.292625  228393 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 19:03:47.292697  228393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 19:03:47.314335  228393 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 19:03:47.314399  228393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0717 19:03:47.335855  228393 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 19:03:47.335919  228393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 19:03:47.356633  228393 provision.go:86] duration metric: configureAuth took 308.386034ms
	I0717 19:03:47.356672  228393 ubuntu.go:193] setting minikube options for container-runtime
	I0717 19:03:47.356897  228393 config.go:182] Loaded profile config "multinode-549411": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:03:47.356999  228393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-549411
	I0717 19:03:47.372956  228393 main.go:141] libmachine: Using SSH client type: native
	I0717 19:03:47.373369  228393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I0717 19:03:47.373387  228393 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:03:47.582242  228393 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:03:47.582282  228393 machine.go:91] provisioned docker machine in 3.832446357s
	I0717 19:03:47.582294  228393 client.go:171] LocalClient.Create took 10.01418059s
	I0717 19:03:47.582321  228393 start.go:167] duration metric: libmachine.API.Create for "multinode-549411" took 10.014257847s
	I0717 19:03:47.582332  228393 start.go:300] post-start starting for "multinode-549411" (driver="docker")
	I0717 19:03:47.582349  228393 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:03:47.582470  228393 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:03:47.582525  228393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-549411
	I0717 19:03:47.600988  228393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/multinode-549411/id_rsa Username:docker}
	I0717 19:03:47.692797  228393 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:03:47.695883  228393 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.2 LTS"
	I0717 19:03:47.695905  228393 command_runner.go:130] > NAME="Ubuntu"
	I0717 19:03:47.695913  228393 command_runner.go:130] > VERSION_ID="22.04"
	I0717 19:03:47.695920  228393 command_runner.go:130] > VERSION="22.04.2 LTS (Jammy Jellyfish)"
	I0717 19:03:47.695927  228393 command_runner.go:130] > VERSION_CODENAME=jammy
	I0717 19:03:47.695934  228393 command_runner.go:130] > ID=ubuntu
	I0717 19:03:47.695941  228393 command_runner.go:130] > ID_LIKE=debian
	I0717 19:03:47.695953  228393 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0717 19:03:47.695965  228393 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0717 19:03:47.695995  228393 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0717 19:03:47.696011  228393 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0717 19:03:47.696019  228393 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0717 19:03:47.696107  228393 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 19:03:47.696143  228393 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 19:03:47.696165  228393 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 19:03:47.696178  228393 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0717 19:03:47.696199  228393 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-138069/.minikube/addons for local assets ...
	I0717 19:03:47.696269  228393 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-138069/.minikube/files for local assets ...
	I0717 19:03:47.696363  228393 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/1448222.pem -> 1448222.pem in /etc/ssl/certs
	I0717 19:03:47.696376  228393 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/1448222.pem -> /etc/ssl/certs/1448222.pem
	I0717 19:03:47.696475  228393 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:03:47.704161  228393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/1448222.pem --> /etc/ssl/certs/1448222.pem (1708 bytes)
	I0717 19:03:47.724984  228393 start.go:303] post-start completed in 142.634462ms
	I0717 19:03:47.725298  228393 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-549411
	I0717 19:03:47.741431  228393 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/multinode-549411/config.json ...
	I0717 19:03:47.741726  228393 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 19:03:47.741784  228393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-549411
	I0717 19:03:47.758039  228393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/multinode-549411/id_rsa Username:docker}
	I0717 19:03:47.844777  228393 command_runner.go:130] > 25%!
	(MISSING)I0717 19:03:47.844857  228393 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 19:03:47.848817  228393 command_runner.go:130] > 219G
	I0717 19:03:47.849132  228393 start.go:128] duration metric: createHost completed in 10.284233228s
	I0717 19:03:47.849157  228393 start.go:83] releasing machines lock for "multinode-549411", held for 10.284376807s
	I0717 19:03:47.849226  228393 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-549411
	I0717 19:03:47.866981  228393 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:03:47.867076  228393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-549411
	I0717 19:03:47.866981  228393 ssh_runner.go:195] Run: cat /version.json
	I0717 19:03:47.867145  228393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-549411
	I0717 19:03:47.885529  228393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/multinode-549411/id_rsa Username:docker}
	I0717 19:03:47.885619  228393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/multinode-549411/id_rsa Username:docker}
	I0717 19:03:47.971534  228393 command_runner.go:130] > {"iso_version": "v1.30.1-1689243309-16875", "kicbase_version": "v0.0.40", "minikube_version": "v1.31.0", "commit": "085433cd1b734742870dea5be8f9ee2ce4c54148"}
	I0717 19:03:48.059151  228393 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	W0717 19:03:48.061510  228393 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
	I0717 19:03:48.061605  228393 ssh_runner.go:195] Run: systemctl --version
	I0717 19:03:48.065772  228393 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.9)
	I0717 19:03:48.065818  228393 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0717 19:03:48.065896  228393 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:03:48.202399  228393 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 19:03:48.206712  228393 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0717 19:03:48.206739  228393 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0717 19:03:48.206747  228393 command_runner.go:130] > Device: 37h/55d	Inode: 559415      Links: 1
	I0717 19:03:48.206753  228393 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 19:03:48.206762  228393 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0717 19:03:48.206768  228393 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0717 19:03:48.206775  228393 command_runner.go:130] > Change: 2023-07-17 18:45:35.686698013 +0000
	I0717 19:03:48.206780  228393 command_runner.go:130] >  Birth: 2023-07-17 18:45:35.686698013 +0000
	I0717 19:03:48.206829  228393 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:03:48.224143  228393 cni.go:227] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0717 19:03:48.224207  228393 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:03:48.251122  228393 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0717 19:03:48.251180  228393 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
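	The two find/mv steps above rename the loopback and bridge/podman CNI configs to *.mk_disabled so only the CNI that minikube manages stays active. A minimal shell sketch for inspecting or manually reversing that rename (paths are taken from the log above; the restore loop is illustrative, not something minikube runs):
		# list the CNI configs that were disabled by the rename above
		ls /etc/cni/net.d/*.mk_disabled
		# to undo the rename manually (illustrative only):
		for f in /etc/cni/net.d/*.mk_disabled; do sudo mv "$f" "${f%.mk_disabled}"; done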
	I0717 19:03:48.251188  228393 start.go:469] detecting cgroup driver to use...
	I0717 19:03:48.251219  228393 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 19:03:48.251265  228393 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:03:48.265504  228393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:03:48.275679  228393 docker.go:196] disabling cri-docker service (if available) ...
	I0717 19:03:48.275740  228393 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:03:48.287936  228393 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:03:48.300431  228393 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:03:48.381597  228393 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:03:48.395053  228393 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0717 19:03:48.462249  228393 docker.go:212] disabling docker service ...
	I0717 19:03:48.462323  228393 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:03:48.480486  228393 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:03:48.491500  228393 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:03:48.565474  228393 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0717 19:03:48.565558  228393 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:03:48.649693  228393 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0717 19:03:48.649785  228393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:03:48.660348  228393 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:03:48.674356  228393 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0717 19:03:48.675153  228393 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 19:03:48.675222  228393 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:03:48.684125  228393 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:03:48.684206  228393 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:03:48.693010  228393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:03:48.701811  228393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:03:48.710346  228393 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
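	The three sed commands above rewrite the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf so the runtime uses the registry.k8s.io/pause:3.9 pause image, the cgroupfs cgroup manager, and a pod-scoped conmon cgroup. A quick verification sketch (the grep is illustrative; the key names come from the sed expressions above):
		# expected, per the edits above:
		#   pause_image = "registry.k8s.io/pause:3.9"
		#   cgroup_manager = "cgroupfs"
		#   conmon_cgroup = "pod"
		grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf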
	I0717 19:03:48.718257  228393 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:03:48.724820  228393 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0717 19:03:48.725421  228393 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:03:48.732520  228393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:03:48.807699  228393 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:03:48.911773  228393 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:03:48.911832  228393 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:03:48.915147  228393 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0717 19:03:48.915174  228393 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0717 19:03:48.915184  228393 command_runner.go:130] > Device: 40h/64d	Inode: 186         Links: 1
	I0717 19:03:48.915191  228393 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 19:03:48.915201  228393 command_runner.go:130] > Access: 2023-07-17 19:03:48.897801240 +0000
	I0717 19:03:48.915206  228393 command_runner.go:130] > Modify: 2023-07-17 19:03:48.897801240 +0000
	I0717 19:03:48.915211  228393 command_runner.go:130] > Change: 2023-07-17 19:03:48.897801240 +0000
	I0717 19:03:48.915215  228393 command_runner.go:130] >  Birth: -
	I0717 19:03:48.915236  228393 start.go:537] Will wait 60s for crictl version
	I0717 19:03:48.915279  228393 ssh_runner.go:195] Run: which crictl
	I0717 19:03:48.918213  228393 command_runner.go:130] > /usr/bin/crictl
	I0717 19:03:48.918338  228393 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:03:48.950100  228393 command_runner.go:130] > Version:  0.1.0
	I0717 19:03:48.950126  228393 command_runner.go:130] > RuntimeName:  cri-o
	I0717 19:03:48.950131  228393 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0717 19:03:48.950136  228393 command_runner.go:130] > RuntimeApiVersion:  v1
	I0717 19:03:48.950154  228393 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0717 19:03:48.950226  228393 ssh_runner.go:195] Run: crio --version
	I0717 19:03:48.981900  228393 command_runner.go:130] > crio version 1.24.6
	I0717 19:03:48.981925  228393 command_runner.go:130] > Version:          1.24.6
	I0717 19:03:48.981937  228393 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0717 19:03:48.981944  228393 command_runner.go:130] > GitTreeState:     clean
	I0717 19:03:48.981952  228393 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0717 19:03:48.981960  228393 command_runner.go:130] > GoVersion:        go1.18.2
	I0717 19:03:48.981966  228393 command_runner.go:130] > Compiler:         gc
	I0717 19:03:48.981974  228393 command_runner.go:130] > Platform:         linux/amd64
	I0717 19:03:48.981982  228393 command_runner.go:130] > Linkmode:         dynamic
	I0717 19:03:48.981994  228393 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0717 19:03:48.982004  228393 command_runner.go:130] > SeccompEnabled:   true
	I0717 19:03:48.982011  228393 command_runner.go:130] > AppArmorEnabled:  false
	I0717 19:03:48.983788  228393 ssh_runner.go:195] Run: crio --version
	I0717 19:03:49.016155  228393 command_runner.go:130] > crio version 1.24.6
	I0717 19:03:49.016175  228393 command_runner.go:130] > Version:          1.24.6
	I0717 19:03:49.016182  228393 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0717 19:03:49.016186  228393 command_runner.go:130] > GitTreeState:     clean
	I0717 19:03:49.016192  228393 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0717 19:03:49.016197  228393 command_runner.go:130] > GoVersion:        go1.18.2
	I0717 19:03:49.016201  228393 command_runner.go:130] > Compiler:         gc
	I0717 19:03:49.016205  228393 command_runner.go:130] > Platform:         linux/amd64
	I0717 19:03:49.016210  228393 command_runner.go:130] > Linkmode:         dynamic
	I0717 19:03:49.016226  228393 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0717 19:03:49.016235  228393 command_runner.go:130] > SeccompEnabled:   true
	I0717 19:03:49.016239  228393 command_runner.go:130] > AppArmorEnabled:  false
	I0717 19:03:49.020428  228393 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.6 ...
	I0717 19:03:49.022111  228393 cli_runner.go:164] Run: docker network inspect multinode-549411 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 19:03:49.038310  228393 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0717 19:03:49.041839  228393 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
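	The bash one-liner above strips any existing host.minikube.internal entry from /etc/hosts and appends the gateway mapping for this cluster's network. A verification sketch (the 192.168.58.1 address is taken from the command above):
		grep 'host.minikube.internal' /etc/hosts
		# expected: 192.168.58.1	host.minikube.internal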
	I0717 19:03:49.051916  228393 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 19:03:49.051998  228393 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:03:49.101745  228393 command_runner.go:130] > {
	I0717 19:03:49.101772  228393 command_runner.go:130] >   "images": [
	I0717 19:03:49.101779  228393 command_runner.go:130] >     {
	I0717 19:03:49.101790  228393 command_runner.go:130] >       "id": "b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da",
	I0717 19:03:49.101799  228393 command_runner.go:130] >       "repoTags": [
	I0717 19:03:49.101808  228393 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0717 19:03:49.101814  228393 command_runner.go:130] >       ],
	I0717 19:03:49.101821  228393 command_runner.go:130] >       "repoDigests": [
	I0717 19:03:49.101832  228393 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974",
	I0717 19:03:49.101838  228393 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"
	I0717 19:03:49.101842  228393 command_runner.go:130] >       ],
	I0717 19:03:49.101847  228393 command_runner.go:130] >       "size": "65249302",
	I0717 19:03:49.101850  228393 command_runner.go:130] >       "uid": null,
	I0717 19:03:49.101854  228393 command_runner.go:130] >       "username": "",
	I0717 19:03:49.101862  228393 command_runner.go:130] >       "spec": null,
	I0717 19:03:49.101867  228393 command_runner.go:130] >       "pinned": false
	I0717 19:03:49.101871  228393 command_runner.go:130] >     },
	I0717 19:03:49.101878  228393 command_runner.go:130] >     {
	I0717 19:03:49.101884  228393 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0717 19:03:49.101888  228393 command_runner.go:130] >       "repoTags": [
	I0717 19:03:49.101893  228393 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0717 19:03:49.101899  228393 command_runner.go:130] >       ],
	I0717 19:03:49.101904  228393 command_runner.go:130] >       "repoDigests": [
	I0717 19:03:49.101911  228393 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0717 19:03:49.101920  228393 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0717 19:03:49.101926  228393 command_runner.go:130] >       ],
	I0717 19:03:49.101933  228393 command_runner.go:130] >       "size": "31470524",
	I0717 19:03:49.101940  228393 command_runner.go:130] >       "uid": null,
	I0717 19:03:49.101944  228393 command_runner.go:130] >       "username": "",
	I0717 19:03:49.101948  228393 command_runner.go:130] >       "spec": null,
	I0717 19:03:49.101952  228393 command_runner.go:130] >       "pinned": false
	I0717 19:03:49.101959  228393 command_runner.go:130] >     },
	I0717 19:03:49.101962  228393 command_runner.go:130] >     {
	I0717 19:03:49.101968  228393 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0717 19:03:49.101975  228393 command_runner.go:130] >       "repoTags": [
	I0717 19:03:49.101981  228393 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0717 19:03:49.101988  228393 command_runner.go:130] >       ],
	I0717 19:03:49.101993  228393 command_runner.go:130] >       "repoDigests": [
	I0717 19:03:49.102002  228393 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0717 19:03:49.102009  228393 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0717 19:03:49.102015  228393 command_runner.go:130] >       ],
	I0717 19:03:49.102019  228393 command_runner.go:130] >       "size": "53621675",
	I0717 19:03:49.102023  228393 command_runner.go:130] >       "uid": null,
	I0717 19:03:49.102027  228393 command_runner.go:130] >       "username": "",
	I0717 19:03:49.102031  228393 command_runner.go:130] >       "spec": null,
	I0717 19:03:49.102035  228393 command_runner.go:130] >       "pinned": false
	I0717 19:03:49.102041  228393 command_runner.go:130] >     },
	I0717 19:03:49.102045  228393 command_runner.go:130] >     {
	I0717 19:03:49.102050  228393 command_runner.go:130] >       "id": "86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681",
	I0717 19:03:49.102057  228393 command_runner.go:130] >       "repoTags": [
	I0717 19:03:49.102063  228393 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.7-0"
	I0717 19:03:49.102069  228393 command_runner.go:130] >       ],
	I0717 19:03:49.102073  228393 command_runner.go:130] >       "repoDigests": [
	I0717 19:03:49.102081  228393 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83",
	I0717 19:03:49.102090  228393 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9"
	I0717 19:03:49.102097  228393 command_runner.go:130] >       ],
	I0717 19:03:49.102103  228393 command_runner.go:130] >       "size": "297083935",
	I0717 19:03:49.102107  228393 command_runner.go:130] >       "uid": {
	I0717 19:03:49.102114  228393 command_runner.go:130] >         "value": "0"
	I0717 19:03:49.102118  228393 command_runner.go:130] >       },
	I0717 19:03:49.102134  228393 command_runner.go:130] >       "username": "",
	I0717 19:03:49.102140  228393 command_runner.go:130] >       "spec": null,
	I0717 19:03:49.102144  228393 command_runner.go:130] >       "pinned": false
	I0717 19:03:49.102148  228393 command_runner.go:130] >     },
	I0717 19:03:49.102151  228393 command_runner.go:130] >     {
	I0717 19:03:49.102159  228393 command_runner.go:130] >       "id": "08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a",
	I0717 19:03:49.102163  228393 command_runner.go:130] >       "repoTags": [
	I0717 19:03:49.102169  228393 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.27.3"
	I0717 19:03:49.102184  228393 command_runner.go:130] >       ],
	I0717 19:03:49.102191  228393 command_runner.go:130] >       "repoDigests": [
	I0717 19:03:49.102198  228393 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb",
	I0717 19:03:49.102210  228393 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:fd03335dd2e7163e5e36e933a0c735d7fec6f42b33ddafad0bc54f333e4a23c0"
	I0717 19:03:49.102213  228393 command_runner.go:130] >       ],
	I0717 19:03:49.102219  228393 command_runner.go:130] >       "size": "122065872",
	I0717 19:03:49.102223  228393 command_runner.go:130] >       "uid": {
	I0717 19:03:49.102230  228393 command_runner.go:130] >         "value": "0"
	I0717 19:03:49.102234  228393 command_runner.go:130] >       },
	I0717 19:03:49.102238  228393 command_runner.go:130] >       "username": "",
	I0717 19:03:49.102242  228393 command_runner.go:130] >       "spec": null,
	I0717 19:03:49.102246  228393 command_runner.go:130] >       "pinned": false
	I0717 19:03:49.102251  228393 command_runner.go:130] >     },
	I0717 19:03:49.102255  228393 command_runner.go:130] >     {
	I0717 19:03:49.102262  228393 command_runner.go:130] >       "id": "7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f",
	I0717 19:03:49.102267  228393 command_runner.go:130] >       "repoTags": [
	I0717 19:03:49.102272  228393 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.27.3"
	I0717 19:03:49.102278  228393 command_runner.go:130] >       ],
	I0717 19:03:49.102283  228393 command_runner.go:130] >       "repoDigests": [
	I0717 19:03:49.102291  228393 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e",
	I0717 19:03:49.102301  228393 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:d3bdc20876edfaa4894cf8464dc98592385a43cbc033b37846dccc2460c7bc06"
	I0717 19:03:49.102305  228393 command_runner.go:130] >       ],
	I0717 19:03:49.102309  228393 command_runner.go:130] >       "size": "113919286",
	I0717 19:03:49.102312  228393 command_runner.go:130] >       "uid": {
	I0717 19:03:49.102317  228393 command_runner.go:130] >         "value": "0"
	I0717 19:03:49.102320  228393 command_runner.go:130] >       },
	I0717 19:03:49.102324  228393 command_runner.go:130] >       "username": "",
	I0717 19:03:49.102330  228393 command_runner.go:130] >       "spec": null,
	I0717 19:03:49.102334  228393 command_runner.go:130] >       "pinned": false
	I0717 19:03:49.102340  228393 command_runner.go:130] >     },
	I0717 19:03:49.102343  228393 command_runner.go:130] >     {
	I0717 19:03:49.102349  228393 command_runner.go:130] >       "id": "5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c",
	I0717 19:03:49.102356  228393 command_runner.go:130] >       "repoTags": [
	I0717 19:03:49.102360  228393 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.27.3"
	I0717 19:03:49.102366  228393 command_runner.go:130] >       ],
	I0717 19:03:49.102370  228393 command_runner.go:130] >       "repoDigests": [
	I0717 19:03:49.102377  228393 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f",
	I0717 19:03:49.102386  228393 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699"
	I0717 19:03:49.102390  228393 command_runner.go:130] >       ],
	I0717 19:03:49.102394  228393 command_runner.go:130] >       "size": "72713623",
	I0717 19:03:49.102398  228393 command_runner.go:130] >       "uid": null,
	I0717 19:03:49.102402  228393 command_runner.go:130] >       "username": "",
	I0717 19:03:49.102405  228393 command_runner.go:130] >       "spec": null,
	I0717 19:03:49.102409  228393 command_runner.go:130] >       "pinned": false
	I0717 19:03:49.102413  228393 command_runner.go:130] >     },
	I0717 19:03:49.102416  228393 command_runner.go:130] >     {
	I0717 19:03:49.102422  228393 command_runner.go:130] >       "id": "41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a",
	I0717 19:03:49.102428  228393 command_runner.go:130] >       "repoTags": [
	I0717 19:03:49.102433  228393 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.27.3"
	I0717 19:03:49.102436  228393 command_runner.go:130] >       ],
	I0717 19:03:49.102440  228393 command_runner.go:130] >       "repoDigests": [
	I0717 19:03:49.102459  228393 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082",
	I0717 19:03:49.102468  228393 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:77b8db7564e395328905beb74a0b9a5db3218a4b16ec19af174957e518df40c8"
	I0717 19:03:49.102472  228393 command_runner.go:130] >       ],
	I0717 19:03:49.102489  228393 command_runner.go:130] >       "size": "59811126",
	I0717 19:03:49.102495  228393 command_runner.go:130] >       "uid": {
	I0717 19:03:49.102499  228393 command_runner.go:130] >         "value": "0"
	I0717 19:03:49.102505  228393 command_runner.go:130] >       },
	I0717 19:03:49.102509  228393 command_runner.go:130] >       "username": "",
	I0717 19:03:49.102513  228393 command_runner.go:130] >       "spec": null,
	I0717 19:03:49.102517  228393 command_runner.go:130] >       "pinned": false
	I0717 19:03:49.102522  228393 command_runner.go:130] >     },
	I0717 19:03:49.102526  228393 command_runner.go:130] >     {
	I0717 19:03:49.102534  228393 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0717 19:03:49.102538  228393 command_runner.go:130] >       "repoTags": [
	I0717 19:03:49.102543  228393 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0717 19:03:49.102547  228393 command_runner.go:130] >       ],
	I0717 19:03:49.102550  228393 command_runner.go:130] >       "repoDigests": [
	I0717 19:03:49.102557  228393 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0717 19:03:49.102566  228393 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0717 19:03:49.102569  228393 command_runner.go:130] >       ],
	I0717 19:03:49.102574  228393 command_runner.go:130] >       "size": "750414",
	I0717 19:03:49.102580  228393 command_runner.go:130] >       "uid": {
	I0717 19:03:49.102586  228393 command_runner.go:130] >         "value": "65535"
	I0717 19:03:49.102590  228393 command_runner.go:130] >       },
	I0717 19:03:49.102597  228393 command_runner.go:130] >       "username": "",
	I0717 19:03:49.102600  228393 command_runner.go:130] >       "spec": null,
	I0717 19:03:49.102604  228393 command_runner.go:130] >       "pinned": false
	I0717 19:03:49.102608  228393 command_runner.go:130] >     }
	I0717 19:03:49.102611  228393 command_runner.go:130] >   ]
	I0717 19:03:49.102614  228393 command_runner.go:130] > }
	I0717 19:03:49.102775  228393 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 19:03:49.102788  228393 crio.go:415] Images already preloaded, skipping extraction
	I0717 19:03:49.102829  228393 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:03:49.134214  228393 command_runner.go:130] > {
	I0717 19:03:49.134240  228393 command_runner.go:130] >   "images": [
	I0717 19:03:49.134248  228393 command_runner.go:130] >     {
	I0717 19:03:49.134261  228393 command_runner.go:130] >       "id": "b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da",
	I0717 19:03:49.134269  228393 command_runner.go:130] >       "repoTags": [
	I0717 19:03:49.134278  228393 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0717 19:03:49.134288  228393 command_runner.go:130] >       ],
	I0717 19:03:49.134298  228393 command_runner.go:130] >       "repoDigests": [
	I0717 19:03:49.134309  228393 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974",
	I0717 19:03:49.134318  228393 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"
	I0717 19:03:49.134324  228393 command_runner.go:130] >       ],
	I0717 19:03:49.134329  228393 command_runner.go:130] >       "size": "65249302",
	I0717 19:03:49.134335  228393 command_runner.go:130] >       "uid": null,
	I0717 19:03:49.134339  228393 command_runner.go:130] >       "username": "",
	I0717 19:03:49.134351  228393 command_runner.go:130] >       "spec": null,
	I0717 19:03:49.134355  228393 command_runner.go:130] >       "pinned": false
	I0717 19:03:49.134358  228393 command_runner.go:130] >     },
	I0717 19:03:49.134362  228393 command_runner.go:130] >     {
	I0717 19:03:49.134368  228393 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0717 19:03:49.134372  228393 command_runner.go:130] >       "repoTags": [
	I0717 19:03:49.134377  228393 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0717 19:03:49.134380  228393 command_runner.go:130] >       ],
	I0717 19:03:49.134384  228393 command_runner.go:130] >       "repoDigests": [
	I0717 19:03:49.134391  228393 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0717 19:03:49.134398  228393 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0717 19:03:49.134402  228393 command_runner.go:130] >       ],
	I0717 19:03:49.134409  228393 command_runner.go:130] >       "size": "31470524",
	I0717 19:03:49.134415  228393 command_runner.go:130] >       "uid": null,
	I0717 19:03:49.134420  228393 command_runner.go:130] >       "username": "",
	I0717 19:03:49.134426  228393 command_runner.go:130] >       "spec": null,
	I0717 19:03:49.134433  228393 command_runner.go:130] >       "pinned": false
	I0717 19:03:49.134438  228393 command_runner.go:130] >     },
	I0717 19:03:49.134442  228393 command_runner.go:130] >     {
	I0717 19:03:49.134450  228393 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0717 19:03:49.134454  228393 command_runner.go:130] >       "repoTags": [
	I0717 19:03:49.134463  228393 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0717 19:03:49.134469  228393 command_runner.go:130] >       ],
	I0717 19:03:49.134473  228393 command_runner.go:130] >       "repoDigests": [
	I0717 19:03:49.134482  228393 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0717 19:03:49.134500  228393 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0717 19:03:49.134506  228393 command_runner.go:130] >       ],
	I0717 19:03:49.134511  228393 command_runner.go:130] >       "size": "53621675",
	I0717 19:03:49.134517  228393 command_runner.go:130] >       "uid": null,
	I0717 19:03:49.134522  228393 command_runner.go:130] >       "username": "",
	I0717 19:03:49.134528  228393 command_runner.go:130] >       "spec": null,
	I0717 19:03:49.134532  228393 command_runner.go:130] >       "pinned": false
	I0717 19:03:49.134538  228393 command_runner.go:130] >     },
	I0717 19:03:49.134541  228393 command_runner.go:130] >     {
	I0717 19:03:49.134549  228393 command_runner.go:130] >       "id": "86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681",
	I0717 19:03:49.134555  228393 command_runner.go:130] >       "repoTags": [
	I0717 19:03:49.134560  228393 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.7-0"
	I0717 19:03:49.134566  228393 command_runner.go:130] >       ],
	I0717 19:03:49.134571  228393 command_runner.go:130] >       "repoDigests": [
	I0717 19:03:49.134580  228393 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83",
	I0717 19:03:49.134589  228393 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9"
	I0717 19:03:49.134598  228393 command_runner.go:130] >       ],
	I0717 19:03:49.134604  228393 command_runner.go:130] >       "size": "297083935",
	I0717 19:03:49.134608  228393 command_runner.go:130] >       "uid": {
	I0717 19:03:49.134612  228393 command_runner.go:130] >         "value": "0"
	I0717 19:03:49.134618  228393 command_runner.go:130] >       },
	I0717 19:03:49.134622  228393 command_runner.go:130] >       "username": "",
	I0717 19:03:49.134628  228393 command_runner.go:130] >       "spec": null,
	I0717 19:03:49.134632  228393 command_runner.go:130] >       "pinned": false
	I0717 19:03:49.134638  228393 command_runner.go:130] >     },
	I0717 19:03:49.134644  228393 command_runner.go:130] >     {
	I0717 19:03:49.134652  228393 command_runner.go:130] >       "id": "08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a",
	I0717 19:03:49.134658  228393 command_runner.go:130] >       "repoTags": [
	I0717 19:03:49.134663  228393 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.27.3"
	I0717 19:03:49.134672  228393 command_runner.go:130] >       ],
	I0717 19:03:49.134679  228393 command_runner.go:130] >       "repoDigests": [
	I0717 19:03:49.134686  228393 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb",
	I0717 19:03:49.134695  228393 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:fd03335dd2e7163e5e36e933a0c735d7fec6f42b33ddafad0bc54f333e4a23c0"
	I0717 19:03:49.134699  228393 command_runner.go:130] >       ],
	I0717 19:03:49.134706  228393 command_runner.go:130] >       "size": "122065872",
	I0717 19:03:49.134710  228393 command_runner.go:130] >       "uid": {
	I0717 19:03:49.134716  228393 command_runner.go:130] >         "value": "0"
	I0717 19:03:49.134720  228393 command_runner.go:130] >       },
	I0717 19:03:49.134726  228393 command_runner.go:130] >       "username": "",
	I0717 19:03:49.134730  228393 command_runner.go:130] >       "spec": null,
	I0717 19:03:49.134737  228393 command_runner.go:130] >       "pinned": false
	I0717 19:03:49.134740  228393 command_runner.go:130] >     },
	I0717 19:03:49.134746  228393 command_runner.go:130] >     {
	I0717 19:03:49.134752  228393 command_runner.go:130] >       "id": "7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f",
	I0717 19:03:49.134758  228393 command_runner.go:130] >       "repoTags": [
	I0717 19:03:49.134764  228393 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.27.3"
	I0717 19:03:49.134774  228393 command_runner.go:130] >       ],
	I0717 19:03:49.134778  228393 command_runner.go:130] >       "repoDigests": [
	I0717 19:03:49.134785  228393 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e",
	I0717 19:03:49.134794  228393 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:d3bdc20876edfaa4894cf8464dc98592385a43cbc033b37846dccc2460c7bc06"
	I0717 19:03:49.134800  228393 command_runner.go:130] >       ],
	I0717 19:03:49.134805  228393 command_runner.go:130] >       "size": "113919286",
	I0717 19:03:49.134811  228393 command_runner.go:130] >       "uid": {
	I0717 19:03:49.134815  228393 command_runner.go:130] >         "value": "0"
	I0717 19:03:49.134821  228393 command_runner.go:130] >       },
	I0717 19:03:49.134825  228393 command_runner.go:130] >       "username": "",
	I0717 19:03:49.134831  228393 command_runner.go:130] >       "spec": null,
	I0717 19:03:49.134835  228393 command_runner.go:130] >       "pinned": false
	I0717 19:03:49.134840  228393 command_runner.go:130] >     },
	I0717 19:03:49.134844  228393 command_runner.go:130] >     {
	I0717 19:03:49.134852  228393 command_runner.go:130] >       "id": "5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c",
	I0717 19:03:49.134858  228393 command_runner.go:130] >       "repoTags": [
	I0717 19:03:49.134863  228393 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.27.3"
	I0717 19:03:49.134869  228393 command_runner.go:130] >       ],
	I0717 19:03:49.134873  228393 command_runner.go:130] >       "repoDigests": [
	I0717 19:03:49.134882  228393 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f",
	I0717 19:03:49.134891  228393 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699"
	I0717 19:03:49.134897  228393 command_runner.go:130] >       ],
	I0717 19:03:49.134902  228393 command_runner.go:130] >       "size": "72713623",
	I0717 19:03:49.134908  228393 command_runner.go:130] >       "uid": null,
	I0717 19:03:49.134913  228393 command_runner.go:130] >       "username": "",
	I0717 19:03:49.134919  228393 command_runner.go:130] >       "spec": null,
	I0717 19:03:49.134923  228393 command_runner.go:130] >       "pinned": false
	I0717 19:03:49.134929  228393 command_runner.go:130] >     },
	I0717 19:03:49.134933  228393 command_runner.go:130] >     {
	I0717 19:03:49.134943  228393 command_runner.go:130] >       "id": "41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a",
	I0717 19:03:49.134948  228393 command_runner.go:130] >       "repoTags": [
	I0717 19:03:49.134953  228393 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.27.3"
	I0717 19:03:49.134959  228393 command_runner.go:130] >       ],
	I0717 19:03:49.134963  228393 command_runner.go:130] >       "repoDigests": [
	I0717 19:03:49.134980  228393 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082",
	I0717 19:03:49.134989  228393 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:77b8db7564e395328905beb74a0b9a5db3218a4b16ec19af174957e518df40c8"
	I0717 19:03:49.134998  228393 command_runner.go:130] >       ],
	I0717 19:03:49.135004  228393 command_runner.go:130] >       "size": "59811126",
	I0717 19:03:49.135008  228393 command_runner.go:130] >       "uid": {
	I0717 19:03:49.135014  228393 command_runner.go:130] >         "value": "0"
	I0717 19:03:49.135020  228393 command_runner.go:130] >       },
	I0717 19:03:49.135027  228393 command_runner.go:130] >       "username": "",
	I0717 19:03:49.135031  228393 command_runner.go:130] >       "spec": null,
	I0717 19:03:49.135038  228393 command_runner.go:130] >       "pinned": false
	I0717 19:03:49.135041  228393 command_runner.go:130] >     },
	I0717 19:03:49.135047  228393 command_runner.go:130] >     {
	I0717 19:03:49.135053  228393 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0717 19:03:49.135059  228393 command_runner.go:130] >       "repoTags": [
	I0717 19:03:49.135064  228393 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0717 19:03:49.135069  228393 command_runner.go:130] >       ],
	I0717 19:03:49.135074  228393 command_runner.go:130] >       "repoDigests": [
	I0717 19:03:49.135083  228393 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0717 19:03:49.135093  228393 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0717 19:03:49.135098  228393 command_runner.go:130] >       ],
	I0717 19:03:49.135103  228393 command_runner.go:130] >       "size": "750414",
	I0717 19:03:49.135108  228393 command_runner.go:130] >       "uid": {
	I0717 19:03:49.135112  228393 command_runner.go:130] >         "value": "65535"
	I0717 19:03:49.135116  228393 command_runner.go:130] >       },
	I0717 19:03:49.135123  228393 command_runner.go:130] >       "username": "",
	I0717 19:03:49.135127  228393 command_runner.go:130] >       "spec": null,
	I0717 19:03:49.135134  228393 command_runner.go:130] >       "pinned": false
	I0717 19:03:49.135137  228393 command_runner.go:130] >     }
	I0717 19:03:49.135143  228393 command_runner.go:130] >   ]
	I0717 19:03:49.135147  228393 command_runner.go:130] > }
	I0717 19:03:49.135252  228393 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 19:03:49.135263  228393 cache_images.go:84] Images are preloaded, skipping loading
	I0717 19:03:49.135313  228393 ssh_runner.go:195] Run: crio config
	I0717 19:03:49.171906  228393 command_runner.go:130] ! time="2023-07-17 19:03:49.171437726Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0717 19:03:49.171944  228393 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0717 19:03:49.176991  228393 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0717 19:03:49.177017  228393 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0717 19:03:49.177023  228393 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0717 19:03:49.177030  228393 command_runner.go:130] > #
	I0717 19:03:49.177043  228393 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0717 19:03:49.177054  228393 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0717 19:03:49.177071  228393 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0717 19:03:49.177086  228393 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0717 19:03:49.177092  228393 command_runner.go:130] > # reload'.
	I0717 19:03:49.177099  228393 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0717 19:03:49.177107  228393 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0717 19:03:49.177116  228393 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0717 19:03:49.177123  228393 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0717 19:03:49.177130  228393 command_runner.go:130] > [crio]
	I0717 19:03:49.177144  228393 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0717 19:03:49.177156  228393 command_runner.go:130] > # containers images, in this directory.
	I0717 19:03:49.177170  228393 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0717 19:03:49.177183  228393 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0717 19:03:49.177192  228393 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0717 19:03:49.177201  228393 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0717 19:03:49.177209  228393 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0717 19:03:49.177219  228393 command_runner.go:130] > # storage_driver = "vfs"
	I0717 19:03:49.177232  228393 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0717 19:03:49.177245  228393 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0717 19:03:49.177255  228393 command_runner.go:130] > # storage_option = [
	I0717 19:03:49.177261  228393 command_runner.go:130] > # ]
	I0717 19:03:49.177275  228393 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0717 19:03:49.177288  228393 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0717 19:03:49.177296  228393 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0717 19:03:49.177304  228393 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0717 19:03:49.177318  228393 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0717 19:03:49.177329  228393 command_runner.go:130] > # always happen on a node reboot
	I0717 19:03:49.177340  228393 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0717 19:03:49.177353  228393 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0717 19:03:49.177365  228393 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0717 19:03:49.177379  228393 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0717 19:03:49.177389  228393 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0717 19:03:49.177405  228393 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0717 19:03:49.177422  228393 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0717 19:03:49.177432  228393 command_runner.go:130] > # internal_wipe = true
	I0717 19:03:49.177444  228393 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0717 19:03:49.177457  228393 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0717 19:03:49.177474  228393 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0717 19:03:49.177487  228393 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0717 19:03:49.177500  228393 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0717 19:03:49.177509  228393 command_runner.go:130] > [crio.api]
	I0717 19:03:49.177522  228393 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0717 19:03:49.177533  228393 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0717 19:03:49.177547  228393 command_runner.go:130] > # IP address on which the stream server will listen.
	I0717 19:03:49.177558  228393 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0717 19:03:49.177572  228393 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0717 19:03:49.177584  228393 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0717 19:03:49.177594  228393 command_runner.go:130] > # stream_port = "0"
	I0717 19:03:49.177605  228393 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0717 19:03:49.177615  228393 command_runner.go:130] > # stream_enable_tls = false
	I0717 19:03:49.177628  228393 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0717 19:03:49.177635  228393 command_runner.go:130] > # stream_idle_timeout = ""
	I0717 19:03:49.177645  228393 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0717 19:03:49.177659  228393 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0717 19:03:49.177669  228393 command_runner.go:130] > # minutes.
	I0717 19:03:49.177679  228393 command_runner.go:130] > # stream_tls_cert = ""
	I0717 19:03:49.177706  228393 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0717 19:03:49.177719  228393 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0717 19:03:49.177726  228393 command_runner.go:130] > # stream_tls_key = ""
	I0717 19:03:49.177734  228393 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0717 19:03:49.177752  228393 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0717 19:03:49.177763  228393 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0717 19:03:49.177772  228393 command_runner.go:130] > # stream_tls_ca = ""
	I0717 19:03:49.177782  228393 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0717 19:03:49.177791  228393 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0717 19:03:49.177801  228393 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0717 19:03:49.177811  228393 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0717 19:03:49.177840  228393 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0717 19:03:49.177856  228393 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0717 19:03:49.177863  228393 command_runner.go:130] > [crio.runtime]
	I0717 19:03:49.177876  228393 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0717 19:03:49.177888  228393 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0717 19:03:49.177895  228393 command_runner.go:130] > # "nofile=1024:2048"
	I0717 19:03:49.177911  228393 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0717 19:03:49.177922  228393 command_runner.go:130] > # default_ulimits = [
	I0717 19:03:49.177930  228393 command_runner.go:130] > # ]
	I0717 19:03:49.177944  228393 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0717 19:03:49.177954  228393 command_runner.go:130] > # no_pivot = false
	I0717 19:03:49.177970  228393 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0717 19:03:49.177980  228393 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0717 19:03:49.177985  228393 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0717 19:03:49.177998  228393 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0717 19:03:49.178009  228393 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0717 19:03:49.178023  228393 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0717 19:03:49.178033  228393 command_runner.go:130] > # conmon = ""
	I0717 19:03:49.178043  228393 command_runner.go:130] > # Cgroup setting for conmon
	I0717 19:03:49.178058  228393 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0717 19:03:49.178065  228393 command_runner.go:130] > conmon_cgroup = "pod"
	I0717 19:03:49.178072  228393 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0717 19:03:49.178084  228393 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0717 19:03:49.178099  228393 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0717 19:03:49.178109  228393 command_runner.go:130] > # conmon_env = [
	I0717 19:03:49.178117  228393 command_runner.go:130] > # ]
	I0717 19:03:49.178129  228393 command_runner.go:130] > # Additional environment variables to set for all the
	I0717 19:03:49.178141  228393 command_runner.go:130] > # containers. These are overridden if set in the
	I0717 19:03:49.178151  228393 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0717 19:03:49.178158  228393 command_runner.go:130] > # default_env = [
	I0717 19:03:49.178163  228393 command_runner.go:130] > # ]
	I0717 19:03:49.178177  228393 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0717 19:03:49.178187  228393 command_runner.go:130] > # selinux = false
	I0717 19:03:49.178200  228393 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0717 19:03:49.178213  228393 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0717 19:03:49.178226  228393 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0717 19:03:49.178234  228393 command_runner.go:130] > # seccomp_profile = ""
	I0717 19:03:49.178240  228393 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0717 19:03:49.178252  228393 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0717 19:03:49.178265  228393 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0717 19:03:49.178276  228393 command_runner.go:130] > # which might increase security.
	I0717 19:03:49.178287  228393 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0717 19:03:49.178302  228393 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0717 19:03:49.178315  228393 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0717 19:03:49.178325  228393 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0717 19:03:49.178337  228393 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0717 19:03:49.178349  228393 command_runner.go:130] > # This option supports live configuration reload.
	I0717 19:03:49.178361  228393 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0717 19:03:49.178373  228393 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0717 19:03:49.178383  228393 command_runner.go:130] > # the cgroup blockio controller.
	I0717 19:03:49.178393  228393 command_runner.go:130] > # blockio_config_file = ""
	I0717 19:03:49.178406  228393 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0717 19:03:49.178413  228393 command_runner.go:130] > # irqbalance daemon.
	I0717 19:03:49.178419  228393 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0717 19:03:49.178433  228393 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0717 19:03:49.178445  228393 command_runner.go:130] > # This option supports live configuration reload.
	I0717 19:03:49.178455  228393 command_runner.go:130] > # rdt_config_file = ""
	I0717 19:03:49.178467  228393 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0717 19:03:49.178477  228393 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0717 19:03:49.178490  228393 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0717 19:03:49.178497  228393 command_runner.go:130] > # separate_pull_cgroup = ""
	I0717 19:03:49.178504  228393 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0717 19:03:49.178517  228393 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0717 19:03:49.178527  228393 command_runner.go:130] > # will be added.
	I0717 19:03:49.178537  228393 command_runner.go:130] > # default_capabilities = [
	I0717 19:03:49.178547  228393 command_runner.go:130] > # 	"CHOWN",
	I0717 19:03:49.178557  228393 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0717 19:03:49.178566  228393 command_runner.go:130] > # 	"FSETID",
	I0717 19:03:49.178575  228393 command_runner.go:130] > # 	"FOWNER",
	I0717 19:03:49.178582  228393 command_runner.go:130] > # 	"SETGID",
	I0717 19:03:49.178586  228393 command_runner.go:130] > # 	"SETUID",
	I0717 19:03:49.178591  228393 command_runner.go:130] > # 	"SETPCAP",
	I0717 19:03:49.178600  228393 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0717 19:03:49.178610  228393 command_runner.go:130] > # 	"KILL",
	I0717 19:03:49.178619  228393 command_runner.go:130] > # ]
	I0717 19:03:49.178634  228393 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0717 19:03:49.178648  228393 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0717 19:03:49.178658  228393 command_runner.go:130] > # add_inheritable_capabilities = true
	I0717 19:03:49.178669  228393 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0717 19:03:49.178680  228393 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0717 19:03:49.178695  228393 command_runner.go:130] > # default_sysctls = [
	I0717 19:03:49.178703  228393 command_runner.go:130] > # ]
	I0717 19:03:49.178715  228393 command_runner.go:130] > # List of devices on the host that a
	I0717 19:03:49.178728  228393 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0717 19:03:49.178737  228393 command_runner.go:130] > # allowed_devices = [
	I0717 19:03:49.178746  228393 command_runner.go:130] > # 	"/dev/fuse",
	I0717 19:03:49.178754  228393 command_runner.go:130] > # ]
	I0717 19:03:49.178762  228393 command_runner.go:130] > # List of additional devices. specified as
	I0717 19:03:49.178796  228393 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0717 19:03:49.178809  228393 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0717 19:03:49.178821  228393 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0717 19:03:49.178831  228393 command_runner.go:130] > # additional_devices = [
	I0717 19:03:49.178839  228393 command_runner.go:130] > # ]
	I0717 19:03:49.178845  228393 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0717 19:03:49.178853  228393 command_runner.go:130] > # cdi_spec_dirs = [
	I0717 19:03:49.178863  228393 command_runner.go:130] > # 	"/etc/cdi",
	I0717 19:03:49.178873  228393 command_runner.go:130] > # 	"/var/run/cdi",
	I0717 19:03:49.178882  228393 command_runner.go:130] > # ]
	I0717 19:03:49.178895  228393 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0717 19:03:49.178908  228393 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0717 19:03:49.178918  228393 command_runner.go:130] > # Defaults to false.
	I0717 19:03:49.178927  228393 command_runner.go:130] > # device_ownership_from_security_context = false
	I0717 19:03:49.178933  228393 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0717 19:03:49.178947  228393 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0717 19:03:49.178957  228393 command_runner.go:130] > # hooks_dir = [
	I0717 19:03:49.178968  228393 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0717 19:03:49.178976  228393 command_runner.go:130] > # ]
	I0717 19:03:49.178989  228393 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0717 19:03:49.179003  228393 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0717 19:03:49.179014  228393 command_runner.go:130] > # its default mounts from the following two files:
	I0717 19:03:49.179017  228393 command_runner.go:130] > #
	I0717 19:03:49.179029  228393 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0717 19:03:49.179043  228393 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0717 19:03:49.179056  228393 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0717 19:03:49.179065  228393 command_runner.go:130] > #
	I0717 19:03:49.179078  228393 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0717 19:03:49.179091  228393 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0717 19:03:49.179102  228393 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0717 19:03:49.179107  228393 command_runner.go:130] > #      only add mounts it finds in this file.
	I0717 19:03:49.179115  228393 command_runner.go:130] > #
	I0717 19:03:49.179126  228393 command_runner.go:130] > # default_mounts_file = ""
	I0717 19:03:49.179139  228393 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0717 19:03:49.179152  228393 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0717 19:03:49.179162  228393 command_runner.go:130] > # pids_limit = 0
	I0717 19:03:49.179175  228393 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0717 19:03:49.179187  228393 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0717 19:03:49.179197  228393 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0717 19:03:49.179218  228393 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0717 19:03:49.179228  228393 command_runner.go:130] > # log_size_max = -1
	I0717 19:03:49.179243  228393 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0717 19:03:49.179253  228393 command_runner.go:130] > # log_to_journald = false
	I0717 19:03:49.179266  228393 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0717 19:03:49.179275  228393 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0717 19:03:49.179281  228393 command_runner.go:130] > # Path to directory for container attach sockets.
	I0717 19:03:49.179291  228393 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0717 19:03:49.179304  228393 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0717 19:03:49.179315  228393 command_runner.go:130] > # bind_mount_prefix = ""
	I0717 19:03:49.179327  228393 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0717 19:03:49.179337  228393 command_runner.go:130] > # read_only = false
	I0717 19:03:49.179350  228393 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0717 19:03:49.179361  228393 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0717 19:03:49.179368  228393 command_runner.go:130] > # live configuration reload.
	I0717 19:03:49.179375  228393 command_runner.go:130] > # log_level = "info"
	I0717 19:03:49.179391  228393 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0717 19:03:49.179403  228393 command_runner.go:130] > # This option supports live configuration reload.
	I0717 19:03:49.179412  228393 command_runner.go:130] > # log_filter = ""
	I0717 19:03:49.179426  228393 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0717 19:03:49.179439  228393 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0717 19:03:49.179446  228393 command_runner.go:130] > # separated by comma.
	I0717 19:03:49.179451  228393 command_runner.go:130] > # uid_mappings = ""
	I0717 19:03:49.179464  228393 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0717 19:03:49.179479  228393 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0717 19:03:49.179489  228393 command_runner.go:130] > # separated by comma.
	I0717 19:03:49.179498  228393 command_runner.go:130] > # gid_mappings = ""
	I0717 19:03:49.179511  228393 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0717 19:03:49.179524  228393 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0717 19:03:49.179534  228393 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0717 19:03:49.179539  228393 command_runner.go:130] > # minimum_mappable_uid = -1
	I0717 19:03:49.179551  228393 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0717 19:03:49.179566  228393 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0717 19:03:49.179579  228393 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0717 19:03:49.179589  228393 command_runner.go:130] > # minimum_mappable_gid = -1
	I0717 19:03:49.179602  228393 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0717 19:03:49.179614  228393 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0717 19:03:49.179626  228393 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0717 19:03:49.179635  228393 command_runner.go:130] > # ctr_stop_timeout = 30
	I0717 19:03:49.179649  228393 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0717 19:03:49.179665  228393 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0717 19:03:49.179676  228393 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0717 19:03:49.179691  228393 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0717 19:03:49.179701  228393 command_runner.go:130] > # drop_infra_ctr = true
	I0717 19:03:49.179711  228393 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0717 19:03:49.179719  228393 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0717 19:03:49.179735  228393 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0717 19:03:49.179746  228393 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0717 19:03:49.179758  228393 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0717 19:03:49.179770  228393 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0717 19:03:49.179780  228393 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0717 19:03:49.179793  228393 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0717 19:03:49.179799  228393 command_runner.go:130] > # pinns_path = ""
	I0717 19:03:49.179809  228393 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0717 19:03:49.179823  228393 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0717 19:03:49.179837  228393 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0717 19:03:49.179847  228393 command_runner.go:130] > # default_runtime = "runc"
	I0717 19:03:49.179858  228393 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0717 19:03:49.179873  228393 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0717 19:03:49.179886  228393 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0717 19:03:49.179897  228393 command_runner.go:130] > # creation as a file is not desired either.
	I0717 19:03:49.179915  228393 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0717 19:03:49.179927  228393 command_runner.go:130] > # the hostname is being managed dynamically.
	I0717 19:03:49.179938  228393 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0717 19:03:49.179946  228393 command_runner.go:130] > # ]
	I0717 19:03:49.179956  228393 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0717 19:03:49.179966  228393 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0717 19:03:49.179990  228393 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0717 19:03:49.180005  228393 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0717 19:03:49.180014  228393 command_runner.go:130] > #
	I0717 19:03:49.180023  228393 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0717 19:03:49.180034  228393 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0717 19:03:49.180044  228393 command_runner.go:130] > #  runtime_type = "oci"
	I0717 19:03:49.180054  228393 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0717 19:03:49.180061  228393 command_runner.go:130] > #  privileged_without_host_devices = false
	I0717 19:03:49.180068  228393 command_runner.go:130] > #  allowed_annotations = []
	I0717 19:03:49.180077  228393 command_runner.go:130] > # Where:
	I0717 19:03:49.180087  228393 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0717 19:03:49.180100  228393 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0717 19:03:49.180114  228393 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0717 19:03:49.180127  228393 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0717 19:03:49.180136  228393 command_runner.go:130] > #   in $PATH.
	I0717 19:03:49.180146  228393 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0717 19:03:49.180156  228393 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0717 19:03:49.180170  228393 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0717 19:03:49.180180  228393 command_runner.go:130] > #   state.
	I0717 19:03:49.180193  228393 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0717 19:03:49.180206  228393 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0717 19:03:49.180219  228393 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0717 19:03:49.180230  228393 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0717 19:03:49.180240  228393 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0717 19:03:49.180257  228393 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0717 19:03:49.180269  228393 command_runner.go:130] > #   The currently recognized values are:
	I0717 19:03:49.180283  228393 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0717 19:03:49.180298  228393 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0717 19:03:49.180311  228393 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0717 19:03:49.180321  228393 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0717 19:03:49.180336  228393 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0717 19:03:49.180351  228393 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0717 19:03:49.180365  228393 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0717 19:03:49.180378  228393 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0717 19:03:49.180390  228393 command_runner.go:130] > #   should be moved to the container's cgroup
	I0717 19:03:49.180399  228393 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0717 19:03:49.180405  228393 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0717 19:03:49.180415  228393 command_runner.go:130] > runtime_type = "oci"
	I0717 19:03:49.180425  228393 command_runner.go:130] > runtime_root = "/run/runc"
	I0717 19:03:49.180435  228393 command_runner.go:130] > runtime_config_path = ""
	I0717 19:03:49.180445  228393 command_runner.go:130] > monitor_path = ""
	I0717 19:03:49.180455  228393 command_runner.go:130] > monitor_cgroup = ""
	I0717 19:03:49.180465  228393 command_runner.go:130] > monitor_exec_cgroup = ""
	I0717 19:03:49.180504  228393 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0717 19:03:49.180515  228393 command_runner.go:130] > # running containers
	I0717 19:03:49.180523  228393 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0717 19:03:49.180537  228393 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0717 19:03:49.180552  228393 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0717 19:03:49.180565  228393 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0717 19:03:49.180576  228393 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0717 19:03:49.180583  228393 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0717 19:03:49.180590  228393 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0717 19:03:49.180600  228393 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0717 19:03:49.180612  228393 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0717 19:03:49.180623  228393 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0717 19:03:49.180636  228393 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0717 19:03:49.180648  228393 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0717 19:03:49.180664  228393 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0717 19:03:49.180676  228393 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0717 19:03:49.180698  228393 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0717 19:03:49.180711  228393 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0717 19:03:49.180728  228393 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0717 19:03:49.180746  228393 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0717 19:03:49.180755  228393 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0717 19:03:49.180770  228393 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0717 19:03:49.180780  228393 command_runner.go:130] > # Example:
	I0717 19:03:49.180791  228393 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0717 19:03:49.180800  228393 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0717 19:03:49.180810  228393 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0717 19:03:49.180822  228393 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0717 19:03:49.180831  228393 command_runner.go:130] > # cpuset = 0
	I0717 19:03:49.180839  228393 command_runner.go:130] > # cpushares = "0-1"
	I0717 19:03:49.180845  228393 command_runner.go:130] > # Where:
	I0717 19:03:49.180853  228393 command_runner.go:130] > # The workload name is workload-type.
	I0717 19:03:49.180869  228393 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0717 19:03:49.180881  228393 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0717 19:03:49.180894  228393 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0717 19:03:49.180910  228393 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0717 19:03:49.180924  228393 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0717 19:03:49.180930  228393 command_runner.go:130] > # 
	I0717 19:03:49.180941  228393 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0717 19:03:49.180952  228393 command_runner.go:130] > #
	I0717 19:03:49.180965  228393 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0717 19:03:49.180978  228393 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0717 19:03:49.180992  228393 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0717 19:03:49.181005  228393 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0717 19:03:49.181015  228393 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0717 19:03:49.181023  228393 command_runner.go:130] > [crio.image]
	I0717 19:03:49.181037  228393 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0717 19:03:49.181048  228393 command_runner.go:130] > # default_transport = "docker://"
	I0717 19:03:49.181061  228393 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0717 19:03:49.181074  228393 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0717 19:03:49.181084  228393 command_runner.go:130] > # global_auth_file = ""
	I0717 19:03:49.181095  228393 command_runner.go:130] > # The image used to instantiate infra containers.
	I0717 19:03:49.181103  228393 command_runner.go:130] > # This option supports live configuration reload.
	I0717 19:03:49.181113  228393 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0717 19:03:49.181128  228393 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0717 19:03:49.181142  228393 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0717 19:03:49.181153  228393 command_runner.go:130] > # This option supports live configuration reload.
	I0717 19:03:49.181163  228393 command_runner.go:130] > # pause_image_auth_file = ""
	I0717 19:03:49.181177  228393 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0717 19:03:49.181188  228393 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0717 19:03:49.181199  228393 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0717 19:03:49.181213  228393 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0717 19:03:49.181224  228393 command_runner.go:130] > # pause_command = "/pause"
	I0717 19:03:49.181236  228393 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0717 19:03:49.181250  228393 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0717 19:03:49.181263  228393 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0717 19:03:49.181274  228393 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0717 19:03:49.181281  228393 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0717 19:03:49.181287  228393 command_runner.go:130] > # signature_policy = ""
	I0717 19:03:49.181299  228393 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0717 19:03:49.181311  228393 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0717 19:03:49.181322  228393 command_runner.go:130] > # changing them here.
	I0717 19:03:49.181336  228393 command_runner.go:130] > # insecure_registries = [
	I0717 19:03:49.181345  228393 command_runner.go:130] > # ]
	I0717 19:03:49.181358  228393 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0717 19:03:49.181370  228393 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0717 19:03:49.181379  228393 command_runner.go:130] > # image_volumes = "mkdir"
	I0717 19:03:49.181387  228393 command_runner.go:130] > # Temporary directory to use for storing big files
	I0717 19:03:49.181392  228393 command_runner.go:130] > # big_files_temporary_dir = ""
	I0717 19:03:49.181400  228393 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0717 19:03:49.181406  228393 command_runner.go:130] > # CNI plugins.
	I0717 19:03:49.181409  228393 command_runner.go:130] > [crio.network]
	I0717 19:03:49.181417  228393 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0717 19:03:49.181422  228393 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0717 19:03:49.181429  228393 command_runner.go:130] > # cni_default_network = ""
	I0717 19:03:49.181435  228393 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0717 19:03:49.181441  228393 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0717 19:03:49.181447  228393 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0717 19:03:49.181456  228393 command_runner.go:130] > # plugin_dirs = [
	I0717 19:03:49.181466  228393 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0717 19:03:49.181475  228393 command_runner.go:130] > # ]
	I0717 19:03:49.181488  228393 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0717 19:03:49.181497  228393 command_runner.go:130] > [crio.metrics]
	I0717 19:03:49.181509  228393 command_runner.go:130] > # Globally enable or disable metrics support.
	I0717 19:03:49.181520  228393 command_runner.go:130] > # enable_metrics = false
	I0717 19:03:49.181529  228393 command_runner.go:130] > # Specify enabled metrics collectors.
	I0717 19:03:49.181536  228393 command_runner.go:130] > # Per default all metrics are enabled.
	I0717 19:03:49.181542  228393 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0717 19:03:49.181550  228393 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0717 19:03:49.181557  228393 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0717 19:03:49.181564  228393 command_runner.go:130] > # metrics_collectors = [
	I0717 19:03:49.181567  228393 command_runner.go:130] > # 	"operations",
	I0717 19:03:49.181574  228393 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0717 19:03:49.181579  228393 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0717 19:03:49.181585  228393 command_runner.go:130] > # 	"operations_errors",
	I0717 19:03:49.181589  228393 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0717 19:03:49.181595  228393 command_runner.go:130] > # 	"image_pulls_by_name",
	I0717 19:03:49.181600  228393 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0717 19:03:49.181606  228393 command_runner.go:130] > # 	"image_pulls_failures",
	I0717 19:03:49.181611  228393 command_runner.go:130] > # 	"image_pulls_successes",
	I0717 19:03:49.181617  228393 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0717 19:03:49.181622  228393 command_runner.go:130] > # 	"image_layer_reuse",
	I0717 19:03:49.181628  228393 command_runner.go:130] > # 	"containers_oom_total",
	I0717 19:03:49.181632  228393 command_runner.go:130] > # 	"containers_oom",
	I0717 19:03:49.181637  228393 command_runner.go:130] > # 	"processes_defunct",
	I0717 19:03:49.181644  228393 command_runner.go:130] > # 	"operations_total",
	I0717 19:03:49.181648  228393 command_runner.go:130] > # 	"operations_latency_seconds",
	I0717 19:03:49.181662  228393 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0717 19:03:49.181669  228393 command_runner.go:130] > # 	"operations_errors_total",
	I0717 19:03:49.181673  228393 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0717 19:03:49.181689  228393 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0717 19:03:49.181700  228393 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0717 19:03:49.181707  228393 command_runner.go:130] > # 	"image_pulls_success_total",
	I0717 19:03:49.181712  228393 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0717 19:03:49.181718  228393 command_runner.go:130] > # 	"containers_oom_count_total",
	I0717 19:03:49.181722  228393 command_runner.go:130] > # ]
	I0717 19:03:49.181729  228393 command_runner.go:130] > # The port on which the metrics server will listen.
	I0717 19:03:49.181733  228393 command_runner.go:130] > # metrics_port = 9090
	I0717 19:03:49.181740  228393 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0717 19:03:49.181745  228393 command_runner.go:130] > # metrics_socket = ""
	I0717 19:03:49.181753  228393 command_runner.go:130] > # The certificate for the secure metrics server.
	I0717 19:03:49.181762  228393 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0717 19:03:49.181770  228393 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0717 19:03:49.181777  228393 command_runner.go:130] > # certificate on any modification event.
	I0717 19:03:49.181781  228393 command_runner.go:130] > # metrics_cert = ""
	I0717 19:03:49.181788  228393 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0717 19:03:49.181793  228393 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0717 19:03:49.181799  228393 command_runner.go:130] > # metrics_key = ""
	I0717 19:03:49.181804  228393 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0717 19:03:49.181810  228393 command_runner.go:130] > [crio.tracing]
	I0717 19:03:49.181816  228393 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0717 19:03:49.181822  228393 command_runner.go:130] > # enable_tracing = false
	I0717 19:03:49.181828  228393 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0717 19:03:49.181835  228393 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0717 19:03:49.181840  228393 command_runner.go:130] > # Number of samples to collect per million spans.
	I0717 19:03:49.181848  228393 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0717 19:03:49.181854  228393 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0717 19:03:49.181860  228393 command_runner.go:130] > [crio.stats]
	I0717 19:03:49.181865  228393 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0717 19:03:49.181873  228393 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0717 19:03:49.181878  228393 command_runner.go:130] > # stats_collection_period = 0
	I0717 19:03:49.181946  228393 cni.go:84] Creating CNI manager for ""
	I0717 19:03:49.181959  228393 cni.go:137] 1 nodes found, recommending kindnet
	I0717 19:03:49.181975  228393 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 19:03:49.181995  228393 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-549411 NodeName:multinode-549411 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 19:03:49.182158  228393 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-549411"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 19:03:49.182260  228393 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-549411 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:multinode-549411 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 19:03:49.182322  228393 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 19:03:49.189973  228393 command_runner.go:130] > kubeadm
	I0717 19:03:49.189995  228393 command_runner.go:130] > kubectl
	I0717 19:03:49.190000  228393 command_runner.go:130] > kubelet
	I0717 19:03:49.190587  228393 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:03:49.190657  228393 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:03:49.198791  228393 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I0717 19:03:49.214954  228393 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 19:03:49.231011  228393 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I0717 19:03:49.247028  228393 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0717 19:03:49.250512  228393 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
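
The two commands above make the control-plane hostname resolvable on the node: check /etc/hosts for the entry, then rewrite the file with any stale line removed and the fresh mapping appended. A minimal Go sketch of that filter-and-append update, assuming a hypothetical standalone helper that writes to an ordinary file path rather than sudo-rewriting /etc/hosts:

	package main

	import (
		"os"
		"strings"
	)

	// ensureHostsEntry drops any line that already maps host and appends "ip<TAB>host".
	func ensureHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil && !os.IsNotExist(err) {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if line != "" && !strings.HasSuffix(line, "\t"+host) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+host)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		// "hosts.local" is a stand-in path; the real flow targets /etc/hosts via sudo.
		_ = ensureHostsEntry("hosts.local", "192.168.58.2", "control-plane.minikube.internal")
	}
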
	I0717 19:03:49.260819  228393 certs.go:56] Setting up /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/multinode-549411 for IP: 192.168.58.2
	I0717 19:03:49.260866  228393 certs.go:190] acquiring lock for shared ca certs: {Name:mk42196ce59710ebf500640671660e2f4656c84e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:03:49.261030  228393 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16890-138069/.minikube/ca.key
	I0717 19:03:49.261084  228393 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16890-138069/.minikube/proxy-client-ca.key
	I0717 19:03:49.261139  228393 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/multinode-549411/client.key
	I0717 19:03:49.261159  228393 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/multinode-549411/client.crt with IP's: []
	I0717 19:03:49.506077  228393 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/multinode-549411/client.crt ...
	I0717 19:03:49.506113  228393 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/multinode-549411/client.crt: {Name:mk5eaf4f17895cb5216cb6ae4c3a36e502fb757b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:03:49.506291  228393 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/multinode-549411/client.key ...
	I0717 19:03:49.506303  228393 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/multinode-549411/client.key: {Name:mk975c010a441943ac86e9b8fa03cbcdff9ff2f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:03:49.506377  228393 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/multinode-549411/apiserver.key.cee25041
	I0717 19:03:49.506392  228393 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/multinode-549411/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0717 19:03:49.662250  228393 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/multinode-549411/apiserver.crt.cee25041 ...
	I0717 19:03:49.662287  228393 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/multinode-549411/apiserver.crt.cee25041: {Name:mk0367a5ccf781c9d74e72a50ae9f9ea2ba7cd69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:03:49.662487  228393 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/multinode-549411/apiserver.key.cee25041 ...
	I0717 19:03:49.662502  228393 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/multinode-549411/apiserver.key.cee25041: {Name:mk653da6e450bc87054b87a291576691de5f8c57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:03:49.662599  228393 certs.go:337] copying /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/multinode-549411/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/multinode-549411/apiserver.crt
	I0717 19:03:49.662672  228393 certs.go:341] copying /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/multinode-549411/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/multinode-549411/apiserver.key
	I0717 19:03:49.662719  228393 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/multinode-549411/proxy-client.key
	I0717 19:03:49.662732  228393 crypto.go:68] Generating cert /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/multinode-549411/proxy-client.crt with IP's: []
	I0717 19:03:49.838081  228393 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/multinode-549411/proxy-client.crt ...
	I0717 19:03:49.838114  228393 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/multinode-549411/proxy-client.crt: {Name:mk8b47003bf380c9d8e5ea83ba1bfae55de0b225 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:03:49.838318  228393 crypto.go:164] Writing key to /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/multinode-549411/proxy-client.key ...
	I0717 19:03:49.838335  228393 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/multinode-549411/proxy-client.key: {Name:mke04c59bab5ab76d824382efe71d052981d45ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
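
The crypto.go/lock.go steps above create the profile's leaf certificates (client, apiserver, aggregator proxy-client), each signed by a CA that already exists under .minikube. Below is a rough standard-library sketch of one such step, not minikube's actual code: it assumes a PKCS#1 RSA CA key on disk, and the file names, subject, and validity period are illustrative.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"os"
		"time"
	)

	func main() {
		// Load the existing CA certificate and private key (PEM on disk).
		caCert := parseCert(mustRead("ca.crt"))
		caKey := parseKey(mustRead("ca.key"))

		// Fresh key pair and template for the client (leaf) certificate.
		leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
		}

		// Sign the leaf with the CA, then write certificate and key as PEM files.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &leafKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		writePEM("client.crt", "CERTIFICATE", der, 0644)
		writePEM("client.key", "RSA PRIVATE KEY", x509.MarshalPKCS1PrivateKey(leafKey), 0600)
	}

	func mustRead(path string) []byte {
		b, err := os.ReadFile(path)
		if err != nil {
			log.Fatal(err)
		}
		return b
	}

	func parseCert(pemBytes []byte) *x509.Certificate {
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		return cert
	}

	func parseKey(pemBytes []byte) *rsa.PrivateKey {
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		key, err := x509.ParsePKCS1PrivateKey(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		return key
	}

	func writePEM(path, blockType string, der []byte, mode os.FileMode) {
		data := pem.EncodeToMemory(&pem.Block{Type: blockType, Bytes: der})
		if err := os.WriteFile(path, data, mode); err != nil {
			log.Fatal(err)
		}
	}
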
	I0717 19:03:49.838431  228393 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/multinode-549411/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 19:03:49.838454  228393 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/multinode-549411/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 19:03:49.838464  228393 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/multinode-549411/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 19:03:49.838477  228393 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/multinode-549411/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 19:03:49.838487  228393 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 19:03:49.838500  228393 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 19:03:49.838512  228393 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 19:03:49.838522  228393 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 19:03:49.838581  228393 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/home/jenkins/minikube-integration/16890-138069/.minikube/certs/144822.pem (1338 bytes)
	W0717 19:03:49.838620  228393 certs.go:433] ignoring /home/jenkins/minikube-integration/16890-138069/.minikube/certs/home/jenkins/minikube-integration/16890-138069/.minikube/certs/144822_empty.pem, impossibly tiny 0 bytes
	I0717 19:03:49.838630  228393 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:03:49.838650  228393 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem (1078 bytes)
	I0717 19:03:49.838670  228393 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/home/jenkins/minikube-integration/16890-138069/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:03:49.838699  228393 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/home/jenkins/minikube-integration/16890-138069/.minikube/certs/key.pem (1675 bytes)
	I0717 19:03:49.838736  228393 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/1448222.pem (1708 bytes)
	I0717 19:03:49.838760  228393 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/1448222.pem -> /usr/share/ca-certificates/1448222.pem
	I0717 19:03:49.838776  228393 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:03:49.838788  228393 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/144822.pem -> /usr/share/ca-certificates/144822.pem
	I0717 19:03:49.839294  228393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/multinode-549411/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 19:03:49.861521  228393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/multinode-549411/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 19:03:49.882754  228393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/multinode-549411/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:03:49.903450  228393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/multinode-549411/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 19:03:49.924295  228393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:03:49.944634  228393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 19:03:49.965026  228393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:03:49.985531  228393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 19:03:50.005931  228393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/1448222.pem --> /usr/share/ca-certificates/1448222.pem (1708 bytes)
	I0717 19:03:50.027098  228393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:03:50.047954  228393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/certs/144822.pem --> /usr/share/ca-certificates/144822.pem (1338 bytes)
	I0717 19:03:50.069104  228393 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:03:50.084991  228393 ssh_runner.go:195] Run: openssl version
	I0717 19:03:50.089908  228393 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0717 19:03:50.090089  228393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1448222.pem && ln -fs /usr/share/ca-certificates/1448222.pem /etc/ssl/certs/1448222.pem"
	I0717 19:03:50.099238  228393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1448222.pem
	I0717 19:03:50.102525  228393 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 17 18:51 /usr/share/ca-certificates/1448222.pem
	I0717 19:03:50.102564  228393 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:51 /usr/share/ca-certificates/1448222.pem
	I0717 19:03:50.102607  228393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1448222.pem
	I0717 19:03:50.108892  228393 command_runner.go:130] > 3ec20f2e
	I0717 19:03:50.109157  228393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1448222.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 19:03:50.118111  228393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:03:50.127238  228393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:03:50.130462  228393 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 17 18:46 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:03:50.130503  228393 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:46 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:03:50.130543  228393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:03:50.136978  228393 command_runner.go:130] > b5213941
	I0717 19:03:50.137168  228393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:03:50.146140  228393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144822.pem && ln -fs /usr/share/ca-certificates/144822.pem /etc/ssl/certs/144822.pem"
	I0717 19:03:50.154737  228393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144822.pem
	I0717 19:03:50.158008  228393 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 17 18:51 /usr/share/ca-certificates/144822.pem
	I0717 19:03:50.158045  228393 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:51 /usr/share/ca-certificates/144822.pem
	I0717 19:03:50.158085  228393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144822.pem
	I0717 19:03:50.164342  228393 command_runner.go:130] > 51391683
	I0717 19:03:50.164527  228393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/144822.pem /etc/ssl/certs/51391683.0"
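
Each of the three blocks above installs a CA certificate into the node's OpenSSL trust store: compute the certificate's subject hash, then symlink /etc/ssl/certs/<hash>.0 to the PEM file. A small Go sketch of that pattern, shelling out to openssl just as the logged commands do (the input path is illustrative, and the program would need root to write under /etc/ssl/certs):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCA links pemPath into /etc/ssl/certs under its OpenSSL subject hash,
	// e.g. /etc/ssl/certs/b5213941.0, mirroring the "openssl x509 -hash" + "ln -fs" steps.
	func installCA(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
		_ = os.Remove(link) // emulate ln -f: ignore "does not exist"
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
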
	I0717 19:03:50.173134  228393 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 19:03:50.176234  228393 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 19:03:50.176262  228393 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 19:03:50.176297  228393 kubeadm.go:404] StartCluster: {Name:multinode-549411 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-549411 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDoma
in:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:03:50.176384  228393 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:03:50.176437  228393 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:03:50.210082  228393 cri.go:89] found id: ""
	I0717 19:03:50.210163  228393 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:03:50.218666  228393 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0717 19:03:50.218690  228393 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0717 19:03:50.218696  228393 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0717 19:03:50.218772  228393 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:03:50.227011  228393 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0717 19:03:50.227117  228393 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:03:50.235050  228393 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0717 19:03:50.235072  228393 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0717 19:03:50.235078  228393 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0717 19:03:50.235087  228393 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:03:50.235116  228393 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 19:03:50.235153  228393 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0717 19:03:50.281304  228393 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0717 19:03:50.281336  228393 command_runner.go:130] > [init] Using Kubernetes version: v1.27.3
	I0717 19:03:50.281370  228393 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 19:03:50.281411  228393 command_runner.go:130] > [preflight] Running pre-flight checks
	I0717 19:03:50.317900  228393 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0717 19:03:50.317943  228393 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0717 19:03:50.318008  228393 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1037-gcp
	I0717 19:03:50.318020  228393 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1037-gcp
	I0717 19:03:50.318068  228393 kubeadm.go:322] OS: Linux
	I0717 19:03:50.318087  228393 command_runner.go:130] > OS: Linux
	I0717 19:03:50.318165  228393 kubeadm.go:322] CGROUPS_CPU: enabled
	I0717 19:03:50.318184  228393 command_runner.go:130] > CGROUPS_CPU: enabled
	I0717 19:03:50.318231  228393 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0717 19:03:50.318239  228393 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0717 19:03:50.318295  228393 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0717 19:03:50.318306  228393 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0717 19:03:50.318375  228393 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0717 19:03:50.318385  228393 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0717 19:03:50.318512  228393 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0717 19:03:50.318536  228393 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0717 19:03:50.318594  228393 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0717 19:03:50.318619  228393 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0717 19:03:50.318674  228393 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0717 19:03:50.318681  228393 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0717 19:03:50.318742  228393 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0717 19:03:50.318752  228393 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0717 19:03:50.318826  228393 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0717 19:03:50.318841  228393 command_runner.go:130] > CGROUPS_BLKIO: enabled
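The preflight block above lists the cgroup controllers kubeadm verifies before init. As a hedged aside, the same controllers can usually be read straight from the kernel on the node; the paths below are standard kernel interfaces, not something taken from this log:

  # cgroup v1: one row per controller, with an "enabled" column
  cat /proc/cgroups
  # cgroup v2 (unified hierarchy): controllers available at the root
  cat /sys/fs/cgroup/cgroup.controllers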
	I0717 19:03:50.381409  228393 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 19:03:50.381463  228393 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 19:03:50.381585  228393 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 19:03:50.381596  228393 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 19:03:50.381709  228393 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 19:03:50.381738  228393 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 19:03:50.581207  228393 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 19:03:50.585913  228393 out.go:204]   - Generating certificates and keys ...
	I0717 19:03:50.581328  228393 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 19:03:50.586098  228393 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 19:03:50.586135  228393 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0717 19:03:50.586226  228393 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 19:03:50.586237  228393 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0717 19:03:50.707221  228393 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 19:03:50.707254  228393 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 19:03:50.859998  228393 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0717 19:03:50.860029  228393 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0717 19:03:51.053568  228393 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0717 19:03:51.053603  228393 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0717 19:03:51.167046  228393 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0717 19:03:51.167085  228393 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0717 19:03:51.461075  228393 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0717 19:03:51.461108  228393 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0717 19:03:51.461239  228393 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-549411] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0717 19:03:51.461237  228393 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-549411] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0717 19:03:51.808364  228393 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0717 19:03:51.808395  228393 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0717 19:03:51.808542  228393 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-549411] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0717 19:03:51.808555  228393 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-549411] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0717 19:03:51.973022  228393 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 19:03:51.973053  228393 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 19:03:52.133603  228393 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 19:03:52.133654  228393 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 19:03:52.272967  228393 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0717 19:03:52.273004  228393 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0717 19:03:52.273103  228393 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 19:03:52.273118  228393 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 19:03:52.458998  228393 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 19:03:52.459037  228393 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 19:03:52.553002  228393 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 19:03:52.553049  228393 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 19:03:53.044993  228393 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 19:03:53.045042  228393 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 19:03:53.104209  228393 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 19:03:53.104246  228393 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 19:03:53.112192  228393 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 19:03:53.112216  228393 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 19:03:53.113110  228393 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 19:03:53.113140  228393 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 19:03:53.113191  228393 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0717 19:03:53.113205  228393 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0717 19:03:53.184321  228393 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 19:03:53.184346  228393 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 19:03:53.187011  228393 out.go:204]   - Booting up control plane ...
	I0717 19:03:53.187146  228393 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 19:03:53.187163  228393 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 19:03:53.188160  228393 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 19:03:53.188191  228393 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 19:03:53.189161  228393 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 19:03:53.189187  228393 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 19:03:53.189860  228393 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 19:03:53.189878  228393 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 19:03:53.192632  228393 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 19:03:53.192652  228393 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 19:03:58.195174  228393 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.002491 seconds
	I0717 19:03:58.195211  228393 command_runner.go:130] > [apiclient] All control plane components are healthy after 5.002491 seconds
	I0717 19:03:58.195336  228393 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 19:03:58.195347  228393 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 19:03:58.208894  228393 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 19:03:58.208929  228393 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 19:03:58.731109  228393 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 19:03:58.731154  228393 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0717 19:03:58.731337  228393 kubeadm.go:322] [mark-control-plane] Marking the node multinode-549411 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 19:03:58.731366  228393 command_runner.go:130] > [mark-control-plane] Marking the node multinode-549411 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 19:03:59.241972  228393 kubeadm.go:322] [bootstrap-token] Using token: cequ5o.b05gvaatj3edvugy
	I0717 19:03:59.243997  228393 out.go:204]   - Configuring RBAC rules ...
	I0717 19:03:59.242050  228393 command_runner.go:130] > [bootstrap-token] Using token: cequ5o.b05gvaatj3edvugy
	I0717 19:03:59.244176  228393 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 19:03:59.244197  228393 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 19:03:59.247797  228393 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 19:03:59.247831  228393 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 19:03:59.254410  228393 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 19:03:59.254426  228393 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 19:03:59.257153  228393 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 19:03:59.257173  228393 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 19:03:59.259737  228393 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 19:03:59.259751  228393 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 19:03:59.263702  228393 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 19:03:59.263719  228393 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 19:03:59.273471  228393 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 19:03:59.273506  228393 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 19:03:59.484347  228393 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0717 19:03:59.484376  228393 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0717 19:03:59.670241  228393 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0717 19:03:59.670272  228393 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0717 19:03:59.671465  228393 kubeadm.go:322] 
	I0717 19:03:59.671567  228393 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0717 19:03:59.671588  228393 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0717 19:03:59.671599  228393 kubeadm.go:322] 
	I0717 19:03:59.671673  228393 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0717 19:03:59.671683  228393 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0717 19:03:59.671689  228393 kubeadm.go:322] 
	I0717 19:03:59.671724  228393 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0717 19:03:59.671734  228393 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0717 19:03:59.671797  228393 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 19:03:59.671808  228393 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 19:03:59.671869  228393 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 19:03:59.671877  228393 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 19:03:59.671881  228393 kubeadm.go:322] 
	I0717 19:03:59.671952  228393 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0717 19:03:59.671961  228393 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0717 19:03:59.671965  228393 kubeadm.go:322] 
	I0717 19:03:59.672047  228393 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 19:03:59.672058  228393 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 19:03:59.672062  228393 kubeadm.go:322] 
	I0717 19:03:59.672138  228393 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0717 19:03:59.672147  228393 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0717 19:03:59.672252  228393 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 19:03:59.672263  228393 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 19:03:59.672349  228393 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 19:03:59.672360  228393 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 19:03:59.672365  228393 kubeadm.go:322] 
	I0717 19:03:59.672444  228393 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 19:03:59.672450  228393 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0717 19:03:59.672519  228393 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0717 19:03:59.672525  228393 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0717 19:03:59.672528  228393 kubeadm.go:322] 
	I0717 19:03:59.672609  228393 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token cequ5o.b05gvaatj3edvugy \
	I0717 19:03:59.672616  228393 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token cequ5o.b05gvaatj3edvugy \
	I0717 19:03:59.672697  228393 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:937c4239101ec8b12459e4fa3de0759350fbf81fa4f52752b966f06f42d7d7ec \
	I0717 19:03:59.672703  228393 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:937c4239101ec8b12459e4fa3de0759350fbf81fa4f52752b966f06f42d7d7ec \
	I0717 19:03:59.672719  228393 kubeadm.go:322] 	--control-plane 
	I0717 19:03:59.672725  228393 command_runner.go:130] > 	--control-plane 
	I0717 19:03:59.672728  228393 kubeadm.go:322] 
	I0717 19:03:59.672814  228393 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0717 19:03:59.672824  228393 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0717 19:03:59.672828  228393 kubeadm.go:322] 
	I0717 19:03:59.672913  228393 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token cequ5o.b05gvaatj3edvugy \
	I0717 19:03:59.672923  228393 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token cequ5o.b05gvaatj3edvugy \
	I0717 19:03:59.673021  228393 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:937c4239101ec8b12459e4fa3de0759350fbf81fa4f52752b966f06f42d7d7ec 
	I0717 19:03:59.673028  228393 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:937c4239101ec8b12459e4fa3de0759350fbf81fa4f52752b966f06f42d7d7ec 
	I0717 19:03:59.675409  228393 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1037-gcp\n", err: exit status 1
	I0717 19:03:59.675430  228393 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1037-gcp\n", err: exit status 1
	I0717 19:03:59.675515  228393 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 19:03:59.675527  228393 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
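The join commands printed above embed a discovery-token CA certificate hash. A minimal sketch of recomputing that hash with openssl; the certificate path follows the certificateDir reported earlier in this run (/var/lib/minikube/certs) and is an assumption about where ca.crt lives on this node:

  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'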
	I0717 19:03:59.675554  228393 cni.go:84] Creating CNI manager for ""
	I0717 19:03:59.675571  228393 cni.go:137] 1 nodes found, recommending kindnet
	I0717 19:03:59.677627  228393 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0717 19:03:59.679165  228393 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 19:03:59.683591  228393 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0717 19:03:59.683618  228393 command_runner.go:130] >   Size: 3955775   	Blocks: 7736       IO Block: 4096   regular file
	I0717 19:03:59.683629  228393 command_runner.go:130] > Device: 37h/55d	Inode: 565096      Links: 1
	I0717 19:03:59.683640  228393 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 19:03:59.683649  228393 command_runner.go:130] > Access: 2023-05-09 19:53:47.000000000 +0000
	I0717 19:03:59.683658  228393 command_runner.go:130] > Modify: 2023-05-09 19:53:47.000000000 +0000
	I0717 19:03:59.683663  228393 command_runner.go:130] > Change: 2023-07-17 18:45:36.078726379 +0000
	I0717 19:03:59.683670  228393 command_runner.go:130] >  Birth: 2023-07-17 18:45:36.054724642 +0000
	I0717 19:03:59.683729  228393 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0717 19:03:59.683744  228393 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 19:03:59.701175  228393 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0717 19:04:00.343732  228393 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0717 19:04:00.349102  228393 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0717 19:04:00.358506  228393 command_runner.go:130] > serviceaccount/kindnet created
	I0717 19:04:00.368069  228393 command_runner.go:130] > daemonset.apps/kindnet created
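With the kindnet CNI manifest applied, one hedged way to confirm the daemonset actually rolled out (standard kubectl; the app=kindnet label is assumed from the usual kindnet manifest, not read from this log):

  kubectl -n kube-system rollout status daemonset/kindnet --timeout=2m
  kubectl -n kube-system get pods -l app=kindnet -o wide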
	I0717 19:04:00.372196  228393 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 19:04:00.372291  228393 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:04:00.372317  228393 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=46c8e7c4243e42e98d29628785c0523fbabbd9b5 minikube.k8s.io/name=multinode-549411 minikube.k8s.io/updated_at=2023_07_17T19_04_00_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:04:00.379377  228393 command_runner.go:130] > -16
	I0717 19:04:00.379409  228393 ops.go:34] apiserver oom_adj: -16
	I0717 19:04:00.475126  228393 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0717 19:04:00.475219  228393 command_runner.go:130] > node/multinode-549411 labeled
	I0717 19:04:00.475278  228393 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:04:00.539793  228393 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 19:04:01.040625  228393 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:04:01.101160  228393 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 19:04:01.540139  228393 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:04:01.605370  228393 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 19:04:02.040978  228393 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:04:02.104359  228393 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 19:04:02.540148  228393 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:04:02.601006  228393 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 19:04:03.040005  228393 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:04:03.104356  228393 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 19:04:03.541015  228393 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:04:03.607639  228393 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 19:04:04.040236  228393 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:04:04.103963  228393 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 19:04:04.540660  228393 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:04:04.605964  228393 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 19:04:05.040588  228393 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:04:05.103055  228393 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 19:04:05.540715  228393 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:04:05.606350  228393 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 19:04:06.040954  228393 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:04:06.102627  228393 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 19:04:06.540877  228393 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:04:06.602361  228393 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 19:04:07.040654  228393 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:04:07.105056  228393 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 19:04:07.540678  228393 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:04:07.604823  228393 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 19:04:08.040402  228393 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:04:08.105014  228393 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 19:04:08.540617  228393 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:04:08.603440  228393 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 19:04:09.040330  228393 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:04:09.103200  228393 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 19:04:09.540115  228393 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:04:09.605904  228393 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 19:04:10.040585  228393 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:04:10.104058  228393 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 19:04:10.540534  228393 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:04:10.603530  228393 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 19:04:11.040034  228393 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:04:11.107439  228393 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 19:04:11.540697  228393 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:04:11.603395  228393 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 19:04:12.040732  228393 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:04:12.104750  228393 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0717 19:04:12.541000  228393 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 19:04:12.606472  228393 command_runner.go:130] > NAME      SECRETS   AGE
	I0717 19:04:12.606494  228393 command_runner.go:130] > default   0         0s
	I0717 19:04:12.608912  228393 kubeadm.go:1081] duration metric: took 12.236687651s to wait for elevateKubeSystemPrivileges.
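The repeated "serviceaccounts \"default\" not found" lines above are expected: the cluster is being polled until kube-controller-manager creates the default service account for the namespace. A hedged shell equivalent of that wait loop:

  until kubectl -n default get serviceaccount default >/dev/null 2>&1; do
    sleep 0.5
  done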
	I0717 19:04:12.608942  228393 kubeadm.go:406] StartCluster complete in 22.432647977s
	I0717 19:04:12.608960  228393 settings.go:142] acquiring lock: {Name:mk9765434b8f4871dd605367f6caa71617d51b6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:04:12.609031  228393 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16890-138069/kubeconfig
	I0717 19:04:12.609667  228393 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-138069/kubeconfig: {Name:mkc53c034e0e90a78d013670a58d5882070a3e3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:04:12.609920  228393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 19:04:12.610027  228393 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 19:04:12.610125  228393 addons.go:69] Setting storage-provisioner=true in profile "multinode-549411"
	I0717 19:04:12.610146  228393 addons.go:231] Setting addon storage-provisioner=true in "multinode-549411"
	I0717 19:04:12.610151  228393 addons.go:69] Setting default-storageclass=true in profile "multinode-549411"
	I0717 19:04:12.610182  228393 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-549411"
	I0717 19:04:12.610222  228393 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16890-138069/kubeconfig
	I0717 19:04:12.610157  228393 config.go:182] Loaded profile config "multinode-549411": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:04:12.610226  228393 host.go:66] Checking if "multinode-549411" exists ...
	I0717 19:04:12.610531  228393 kapi.go:59] client config for multinode-549411: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16890-138069/.minikube/profiles/multinode-549411/client.crt", KeyFile:"/home/jenkins/minikube-integration/16890-138069/.minikube/profiles/multinode-549411/client.key", CAFile:"/home/jenkins/minikube-integration/16890-138069/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 19:04:12.610601  228393 cli_runner.go:164] Run: docker container inspect multinode-549411 --format={{.State.Status}}
	I0717 19:04:12.610809  228393 cli_runner.go:164] Run: docker container inspect multinode-549411 --format={{.State.Status}}
	I0717 19:04:12.611464  228393 cert_rotation.go:137] Starting client certificate rotation controller
	I0717 19:04:12.611742  228393 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0717 19:04:12.611757  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:12.611769  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:12.611781  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:12.622664  228393 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0717 19:04:12.622695  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:12.622708  228393 round_trippers.go:580]     Content-Length: 291
	I0717 19:04:12.622716  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:12 GMT
	I0717 19:04:12.622724  228393 round_trippers.go:580]     Audit-Id: 782ddb69-45ca-4ad8-8c43-1b0acaf18952
	I0717 19:04:12.622734  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:12.622747  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:12.622761  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:12.622773  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:12.622809  228393 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0a0aff40-ae4c-45a2-85d1-4b9fe202ee82","resourceVersion":"261","creationTimestamp":"2023-07-17T19:03:59Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0717 19:04:12.623340  228393 request.go:1188] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0a0aff40-ae4c-45a2-85d1-4b9fe202ee82","resourceVersion":"261","creationTimestamp":"2023-07-17T19:03:59Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0717 19:04:12.623418  228393 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0717 19:04:12.623430  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:12.623442  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:12.623456  228393 round_trippers.go:473]     Content-Type: application/json
	I0717 19:04:12.623468  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:12.630993  228393 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0717 19:04:12.631023  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:12.631035  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:12 GMT
	I0717 19:04:12.631044  228393 round_trippers.go:580]     Audit-Id: 4b281d5f-485b-4219-aa9d-e74beb8e19d8
	I0717 19:04:12.631058  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:12.631070  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:12.631081  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:12.631093  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:12.631105  228393 round_trippers.go:580]     Content-Length: 291
	I0717 19:04:12.631140  228393 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0a0aff40-ae4c-45a2-85d1-4b9fe202ee82","resourceVersion":"359","creationTimestamp":"2023-07-17T19:03:59Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0717 19:04:12.631311  228393 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16890-138069/kubeconfig
	I0717 19:04:12.631545  228393 kapi.go:59] client config for multinode-549411: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16890-138069/.minikube/profiles/multinode-549411/client.crt", KeyFile:"/home/jenkins/minikube-integration/16890-138069/.minikube/profiles/multinode-549411/client.key", CAFile:"/home/jenkins/minikube-integration/16890-138069/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 19:04:12.631868  228393 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0717 19:04:12.631887  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:12.631897  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:12.631908  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:12.636030  228393 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:04:12.633914  228393 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 19:04:12.638056  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:12.638070  228393 round_trippers.go:580]     Content-Length: 109
	I0717 19:04:12.638080  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:12 GMT
	I0717 19:04:12.638092  228393 round_trippers.go:580]     Audit-Id: b7e300fc-033a-4b0a-bfc2-38504dfdaacb
	I0717 19:04:12.638101  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:12.638112  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:12.638121  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:12.638133  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:12.638195  228393 request.go:1188] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"359"},"items":[]}
	I0717 19:04:12.638208  228393 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:04:12.638220  228393 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 19:04:12.638289  228393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-549411
	I0717 19:04:12.638557  228393 addons.go:231] Setting addon default-storageclass=true in "multinode-549411"
	I0717 19:04:12.638602  228393 host.go:66] Checking if "multinode-549411" exists ...
	I0717 19:04:12.639104  228393 cli_runner.go:164] Run: docker container inspect multinode-549411 --format={{.State.Status}}
	I0717 19:04:12.658113  228393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/multinode-549411/id_rsa Username:docker}
	I0717 19:04:12.658921  228393 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 19:04:12.658944  228393 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 19:04:12.658992  228393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-549411
	I0717 19:04:12.681746  228393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/multinode-549411/id_rsa Username:docker}
	I0717 19:04:12.697059  228393 command_runner.go:130] > apiVersion: v1
	I0717 19:04:12.697083  228393 command_runner.go:130] > data:
	I0717 19:04:12.697087  228393 command_runner.go:130] >   Corefile: |
	I0717 19:04:12.697091  228393 command_runner.go:130] >     .:53 {
	I0717 19:04:12.697095  228393 command_runner.go:130] >         errors
	I0717 19:04:12.697100  228393 command_runner.go:130] >         health {
	I0717 19:04:12.697104  228393 command_runner.go:130] >            lameduck 5s
	I0717 19:04:12.697108  228393 command_runner.go:130] >         }
	I0717 19:04:12.697112  228393 command_runner.go:130] >         ready
	I0717 19:04:12.697122  228393 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0717 19:04:12.697130  228393 command_runner.go:130] >            pods insecure
	I0717 19:04:12.697137  228393 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0717 19:04:12.697148  228393 command_runner.go:130] >            ttl 30
	I0717 19:04:12.697159  228393 command_runner.go:130] >         }
	I0717 19:04:12.697165  228393 command_runner.go:130] >         prometheus :9153
	I0717 19:04:12.697178  228393 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0717 19:04:12.697188  228393 command_runner.go:130] >            max_concurrent 1000
	I0717 19:04:12.697194  228393 command_runner.go:130] >         }
	I0717 19:04:12.697203  228393 command_runner.go:130] >         cache 30
	I0717 19:04:12.697209  228393 command_runner.go:130] >         loop
	I0717 19:04:12.697218  228393 command_runner.go:130] >         reload
	I0717 19:04:12.697224  228393 command_runner.go:130] >         loadbalance
	I0717 19:04:12.697230  228393 command_runner.go:130] >     }
	I0717 19:04:12.697236  228393 command_runner.go:130] > kind: ConfigMap
	I0717 19:04:12.697241  228393 command_runner.go:130] > metadata:
	I0717 19:04:12.697257  228393 command_runner.go:130] >   creationTimestamp: "2023-07-17T19:03:59Z"
	I0717 19:04:12.697263  228393 command_runner.go:130] >   name: coredns
	I0717 19:04:12.697273  228393 command_runner.go:130] >   namespace: kube-system
	I0717 19:04:12.697283  228393 command_runner.go:130] >   resourceVersion: "257"
	I0717 19:04:12.697300  228393 command_runner.go:130] >   uid: 82c8b0bd-58cb-4b0e-bfb1-a9d9284463fe
	I0717 19:04:12.697515  228393 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0717 19:04:12.781395  228393 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 19:04:12.885730  228393 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 19:04:13.132267  228393 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0717 19:04:13.132302  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:13.132315  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:13.132325  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:13.162917  228393 round_trippers.go:574] Response Status: 200 OK in 30 milliseconds
	I0717 19:04:13.162945  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:13.162956  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:13.162966  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:13.162975  228393 round_trippers.go:580]     Content-Length: 291
	I0717 19:04:13.162984  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:13 GMT
	I0717 19:04:13.162991  228393 round_trippers.go:580]     Audit-Id: b1bfc812-c21b-44a8-af39-1cfc5f08bc11
	I0717 19:04:13.163000  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:13.163012  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:13.163046  228393 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0a0aff40-ae4c-45a2-85d1-4b9fe202ee82","resourceVersion":"375","creationTimestamp":"2023-07-17T19:03:59Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0717 19:04:13.163173  228393 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-549411" context rescaled to 1 replicas
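The GET/PUT pair on the Scale subresource above drops the CoreDNS deployment from 2 replicas to 1 for this single-node profile. A hedged kubectl equivalent (not the code path used here, just the same end state):

  kubectl -n kube-system scale deployment coredns --replicas=1
  kubectl -n kube-system get deployment coredns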
	I0717 19:04:13.163227  228393 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 19:04:13.165588  228393 out.go:177] * Verifying Kubernetes components...
	I0717 19:04:13.167125  228393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:04:13.589122  228393 command_runner.go:130] > configmap/coredns replaced
	I0717 19:04:13.589163  228393 start.go:917] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
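The replace pipeline above splices a hosts stanza mapping host.minikube.internal to 192.168.58.1 into the Corefile. One hedged way to confirm the injected block, assuming kubectl access to the same cluster:

  kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
  # expect a hosts { 192.168.58.1 host.minikube.internal ... fallthrough } block ahead of the forward plugin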
	I0717 19:04:13.872181  228393 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0717 19:04:13.877331  228393 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0717 19:04:13.884103  228393 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0717 19:04:13.895106  228393 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0717 19:04:13.901528  228393 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0717 19:04:13.908944  228393 command_runner.go:130] > pod/storage-provisioner created
	I0717 19:04:13.913388  228393 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.131951668s)
	I0717 19:04:13.913477  228393 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0717 19:04:13.913512  228393 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.027753156s)
	I0717 19:04:13.915520  228393 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0717 19:04:13.913995  228393 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16890-138069/kubeconfig
	I0717 19:04:13.917546  228393 addons.go:502] enable addons completed in 1.307518948s: enabled=[storage-provisioner default-storageclass]
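With storage-provisioner and default-storageclass enabled, a hedged spot check from the host; the profile name is taken from this run, and addons list is a standard minikube subcommand:

  out/minikube-linux-amd64 -p multinode-549411 addons list
  kubectl -n kube-system get pod storage-provisioner
  kubectl get storageclass standard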
	I0717 19:04:13.917770  228393 kapi.go:59] client config for multinode-549411: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16890-138069/.minikube/profiles/multinode-549411/client.crt", KeyFile:"/home/jenkins/minikube-integration/16890-138069/.minikube/profiles/multinode-549411/client.key", CAFile:"/home/jenkins/minikube-integration/16890-138069/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 19:04:13.918012  228393 node_ready.go:35] waiting up to 6m0s for node "multinode-549411" to be "Ready" ...
	I0717 19:04:13.918078  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:13.918085  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:13.918092  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:13.918101  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:13.920011  228393 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 19:04:13.920027  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:13.920034  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:13.920040  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:13 GMT
	I0717 19:04:13.920045  228393 round_trippers.go:580]     Audit-Id: 18b4666b-2278-4697-8864-cd6bb28dca99
	I0717 19:04:13.920050  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:13.920057  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:13.920066  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:13.920169  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
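The requests that follow poll the node object until it reports Ready. A hedged one-liner that waits on the same condition instead of polling by hand:

  kubectl wait --for=condition=Ready node/multinode-549411 --timeout=6m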
	I0717 19:04:14.421488  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:14.421517  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:14.421529  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:14.421539  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:14.423896  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:14.423915  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:14.423922  228393 round_trippers.go:580]     Audit-Id: c0d4aae6-b08c-46d3-a946-8bf4f359e4e6
	I0717 19:04:14.423928  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:14.423933  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:14.423939  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:14.423944  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:14.423950  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:14 GMT
	I0717 19:04:14.424075  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:14.921262  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:14.921284  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:14.921292  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:14.921298  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:14.923528  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:14.923550  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:14.923559  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:14.923568  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:14.923576  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:14.923586  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:14 GMT
	I0717 19:04:14.923596  228393 round_trippers.go:580]     Audit-Id: b46a1953-8475-4b38-af42-6f75178a56ef
	I0717 19:04:14.923602  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:14.923768  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:15.421279  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:15.421301  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:15.421309  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:15.421316  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:15.423767  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:15.423788  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:15.423795  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:15.423801  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:15.423807  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:15.423815  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:15 GMT
	I0717 19:04:15.423820  228393 round_trippers.go:580]     Audit-Id: f25051cb-eaa4-4d57-b22c-a40759fecb30
	I0717 19:04:15.423826  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:15.423967  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:15.921591  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:15.921619  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:15.921627  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:15.921633  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:15.924214  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:15.924245  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:15.924256  228393 round_trippers.go:580]     Audit-Id: d8b1fb3c-4de5-4fde-ba70-f2410fa6339f
	I0717 19:04:15.924266  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:15.924277  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:15.924286  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:15.924296  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:15.924309  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:15 GMT
	I0717 19:04:15.924393  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:15.924747  228393 node_ready.go:58] node "multinode-549411" has status "Ready":"False"
	I0717 19:04:16.420964  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:16.420991  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:16.421004  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:16.421013  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:16.423427  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:16.423451  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:16.423462  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:16.423470  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:16.423479  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:16.423488  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:16.423498  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:16 GMT
	I0717 19:04:16.423507  228393 round_trippers.go:580]     Audit-Id: 3734b9c3-c2c5-4301-b865-7994033db642
	I0717 19:04:16.423595  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:16.921215  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:16.921237  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:16.921255  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:16.921261  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:16.923863  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:16.923895  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:16.923907  228393 round_trippers.go:580]     Audit-Id: 88853fdd-540d-4917-8964-e3aa8815a620
	I0717 19:04:16.923916  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:16.923929  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:16.923941  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:16.923950  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:16.923960  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:16 GMT
	I0717 19:04:16.924110  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:17.421063  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:17.421083  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:17.421094  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:17.421100  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:17.423535  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:17.423557  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:17.423566  228393 round_trippers.go:580]     Audit-Id: 08999d84-618d-4a9d-a1ef-fb82129357d1
	I0717 19:04:17.423572  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:17.423579  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:17.423588  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:17.423598  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:17.423606  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:17 GMT
	I0717 19:04:17.423836  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:17.920981  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:17.921004  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:17.921014  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:17.921021  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:17.923332  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:17.923358  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:17.923367  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:17.923373  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:17.923378  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:17.923386  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:17.923392  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:17 GMT
	I0717 19:04:17.923397  228393 round_trippers.go:580]     Audit-Id: 0eac5811-b50e-4d8c-8992-5d56c6602b8c
	I0717 19:04:17.923498  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:18.421026  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:18.421053  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:18.421065  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:18.421073  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:18.423696  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:18.423718  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:18.423726  228393 round_trippers.go:580]     Audit-Id: 03a2e273-1099-4acf-ba6b-bc7042cd6182
	I0717 19:04:18.423731  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:18.423737  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:18.423743  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:18.423754  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:18.423763  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:18 GMT
	I0717 19:04:18.424027  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:18.424417  228393 node_ready.go:58] node "multinode-549411" has status "Ready":"False"
	I0717 19:04:18.921216  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:18.921242  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:18.921250  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:18.921257  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:18.923717  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:18.923742  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:18.923751  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:18.923758  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:18.923766  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:18.923775  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:18.923784  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:18 GMT
	I0717 19:04:18.923797  228393 round_trippers.go:580]     Audit-Id: 18120680-723f-42ca-8ae2-bde17b7a0e3d
	I0717 19:04:18.923941  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:19.421655  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:19.421684  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:19.421693  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:19.421699  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:19.424318  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:19.424348  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:19.424360  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:19 GMT
	I0717 19:04:19.424369  228393 round_trippers.go:580]     Audit-Id: 1d439547-3f30-4cfa-b6e6-d04a255dec46
	I0717 19:04:19.424376  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:19.424389  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:19.424402  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:19.424414  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:19.424569  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:19.920967  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:19.920992  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:19.921002  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:19.921009  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:19.923780  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:19.923814  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:19.923827  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:19 GMT
	I0717 19:04:19.923838  228393 round_trippers.go:580]     Audit-Id: e1e610d5-1e9c-49df-a552-ec596d1dbc1a
	I0717 19:04:19.923848  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:19.923861  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:19.923879  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:19.923889  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:19.924042  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:20.421258  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:20.421280  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:20.421288  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:20.421294  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:20.423810  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:20.423835  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:20.423844  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:20 GMT
	I0717 19:04:20.423852  228393 round_trippers.go:580]     Audit-Id: e78dd9cd-6a97-48d7-82fa-a8ab07710d9a
	I0717 19:04:20.423861  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:20.423870  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:20.423879  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:20.423888  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:20.424059  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:20.424516  228393 node_ready.go:58] node "multinode-549411" has status "Ready":"False"
	I0717 19:04:20.921244  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:20.921265  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:20.921274  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:20.921280  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:20.923632  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:20.923660  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:20.923671  228393 round_trippers.go:580]     Audit-Id: 80587a47-ba31-48d3-90c2-9873f895d502
	I0717 19:04:20.923680  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:20.923689  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:20.923698  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:20.923709  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:20.923721  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:20 GMT
	I0717 19:04:20.923935  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:21.421568  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:21.421594  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:21.421606  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:21.421614  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:21.423940  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:21.423968  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:21.423992  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:21.424002  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:21.424010  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:21.424017  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:21.424025  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:21 GMT
	I0717 19:04:21.424031  228393 round_trippers.go:580]     Audit-Id: fd46594d-9310-4f2a-b27d-357704853ed0
	I0717 19:04:21.424151  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:21.921223  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:21.921245  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:21.921254  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:21.921261  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:21.923714  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:21.923740  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:21.923751  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:21.923760  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:21 GMT
	I0717 19:04:21.923768  228393 round_trippers.go:580]     Audit-Id: 1655d19e-23fd-4916-bf2e-ca0c877ef0c6
	I0717 19:04:21.923775  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:21.923782  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:21.923791  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:21.923891  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:22.421674  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:22.421694  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:22.421703  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:22.421709  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:22.424106  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:22.424131  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:22.424141  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:22.424147  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:22.424156  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:22 GMT
	I0717 19:04:22.424165  228393 round_trippers.go:580]     Audit-Id: e10ecac2-eca0-4918-b90c-aeb6c6a40270
	I0717 19:04:22.424173  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:22.424181  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:22.424433  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:22.424808  228393 node_ready.go:58] node "multinode-549411" has status "Ready":"False"
	I0717 19:04:22.920999  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:22.921024  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:22.921033  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:22.921040  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:22.923510  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:22.923530  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:22.923538  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:22.923544  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:22 GMT
	I0717 19:04:22.923549  228393 round_trippers.go:580]     Audit-Id: af05a726-a05d-4cce-858b-24748aba5568
	I0717 19:04:22.923554  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:22.923562  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:22.923570  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:22.923695  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:23.421267  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:23.421289  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:23.421298  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:23.421306  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:23.423803  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:23.423827  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:23.423836  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:23.423844  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:23.423852  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:23 GMT
	I0717 19:04:23.423862  228393 round_trippers.go:580]     Audit-Id: c710b502-8753-48d0-a9fc-52b8ee97ed24
	I0717 19:04:23.423872  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:23.423885  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:23.424121  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:23.921767  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:23.921793  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:23.921806  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:23.921815  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:23.924162  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:23.924186  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:23.924193  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:23.924200  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:23 GMT
	I0717 19:04:23.924205  228393 round_trippers.go:580]     Audit-Id: 5d1f2924-33d1-43c2-b2b2-230284a7b4a6
	I0717 19:04:23.924210  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:23.924216  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:23.924221  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:23.924332  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:24.420930  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:24.420953  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:24.420966  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:24.420973  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:24.423388  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:24.423421  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:24.423432  228393 round_trippers.go:580]     Audit-Id: 95427c68-884b-4113-99e2-3500afa19ebe
	I0717 19:04:24.423442  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:24.423453  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:24.423460  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:24.423468  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:24.423473  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:24 GMT
	I0717 19:04:24.423604  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:24.920929  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:24.920949  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:24.920958  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:24.920964  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:24.923390  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:24.923410  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:24.923417  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:24.923426  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:24 GMT
	I0717 19:04:24.923435  228393 round_trippers.go:580]     Audit-Id: 1f179d61-347d-4de8-b992-ad5bf32071fa
	I0717 19:04:24.923445  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:24.923458  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:24.923470  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:24.923582  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:24.924040  228393 node_ready.go:58] node "multinode-549411" has status "Ready":"False"
	I0717 19:04:25.420923  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:25.420942  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:25.420950  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:25.420957  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:25.423488  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:25.423515  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:25.423526  228393 round_trippers.go:580]     Audit-Id: 9c38379c-16b3-4def-a3c7-ff7cf30df732
	I0717 19:04:25.423535  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:25.423544  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:25.423554  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:25.423563  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:25.423575  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:25 GMT
	I0717 19:04:25.423716  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:25.920958  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:25.920981  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:25.920989  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:25.920995  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:25.923432  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:25.923451  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:25.923459  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:25.923465  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:25 GMT
	I0717 19:04:25.923471  228393 round_trippers.go:580]     Audit-Id: 782d7222-7b8c-4c70-a092-e8546622ec3f
	I0717 19:04:25.923477  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:25.923485  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:25.923494  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:25.923680  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:26.420960  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:26.420982  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:26.420991  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:26.420997  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:26.423509  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:26.423529  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:26.423536  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:26.423542  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:26.423547  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:26.423553  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:26.423558  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:26 GMT
	I0717 19:04:26.423563  228393 round_trippers.go:580]     Audit-Id: 8261fc7b-e832-4f97-8087-13ccba59f00d
	I0717 19:04:26.423716  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:26.920932  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:26.920953  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:26.920962  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:26.920968  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:26.923304  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:26.923322  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:26.923330  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:26.923336  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:26.923341  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:26.923346  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:26.923352  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:26 GMT
	I0717 19:04:26.923357  228393 round_trippers.go:580]     Audit-Id: e83b4f73-7913-47b5-a9c2-69c9134d6564
	I0717 19:04:26.923504  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:27.421471  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:27.421495  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:27.421509  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:27.421518  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:27.423937  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:27.423959  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:27.423966  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:27.423992  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:27.424002  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:27 GMT
	I0717 19:04:27.424012  228393 round_trippers.go:580]     Audit-Id: ce73c627-ddc9-461f-8075-97a202abf574
	I0717 19:04:27.424020  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:27.424029  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:27.424161  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:27.424597  228393 node_ready.go:58] node "multinode-549411" has status "Ready":"False"
	I0717 19:04:27.921848  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:27.921870  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:27.921879  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:27.921885  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:27.924218  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:27.924237  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:27.924245  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:27.924251  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:27.924257  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:27.924262  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:27.924268  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:27 GMT
	I0717 19:04:27.924280  228393 round_trippers.go:580]     Audit-Id: 3a07a926-d3ee-4b04-8e15-8c60da1337df
	I0717 19:04:27.924408  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:28.420980  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:28.421003  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:28.421011  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:28.421017  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:28.423392  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:28.423413  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:28.423420  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:28.423430  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:28.423440  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:28 GMT
	I0717 19:04:28.423448  228393 round_trippers.go:580]     Audit-Id: 9a822209-3bf9-48f1-9383-9603c5b61961
	I0717 19:04:28.423457  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:28.423470  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:28.423628  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:28.921196  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:28.921221  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:28.921232  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:28.921240  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:28.923587  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:28.923612  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:28.923622  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:28.923630  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:28.923639  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:28.923647  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:28 GMT
	I0717 19:04:28.923657  228393 round_trippers.go:580]     Audit-Id: 7e1269b9-f387-4607-8e64-94fbb7dfce80
	I0717 19:04:28.923666  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:28.923760  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:29.421448  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:29.421475  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:29.421486  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:29.421496  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:29.423907  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:29.423929  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:29.423936  228393 round_trippers.go:580]     Audit-Id: d76cba66-9a58-46e6-a2f8-d39eafad64c7
	I0717 19:04:29.423942  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:29.423947  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:29.423952  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:29.423957  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:29.423963  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:29 GMT
	I0717 19:04:29.424104  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:29.921226  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:29.921256  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:29.921264  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:29.921271  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:29.923627  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:29.923652  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:29.923662  228393 round_trippers.go:580]     Audit-Id: b1bee8fa-02fd-4f6d-b828-42c9c1a50d1b
	I0717 19:04:29.923672  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:29.923679  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:29.923688  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:29.923697  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:29.923706  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:29 GMT
	I0717 19:04:29.923803  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:29.924147  228393 node_ready.go:58] node "multinode-549411" has status "Ready":"False"
	I0717 19:04:30.421483  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:30.421503  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:30.421512  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:30.421518  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:30.424139  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:30.424159  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:30.424170  228393 round_trippers.go:580]     Audit-Id: fc5d7142-efc8-4fef-97e8-aa016814eb34
	I0717 19:04:30.424178  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:30.424187  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:30.424196  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:30.424204  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:30.424214  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:30 GMT
	I0717 19:04:30.424361  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:30.920910  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:30.920933  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:30.920942  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:30.920948  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:30.923312  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:30.923338  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:30.923349  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:30.923358  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:30.923372  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:30.923381  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:30.923390  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:30 GMT
	I0717 19:04:30.923400  228393 round_trippers.go:580]     Audit-Id: 6ccfc549-e85d-4c02-8bcf-bb8cf499ac4d
	I0717 19:04:30.923531  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:31.421083  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:31.421104  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:31.421112  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:31.421118  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:31.423543  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:31.423563  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:31.423570  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:31.423576  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:31 GMT
	I0717 19:04:31.423581  228393 round_trippers.go:580]     Audit-Id: 40abb881-8e6d-45b2-9894-758377f94cd7
	I0717 19:04:31.423587  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:31.423592  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:31.423598  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:31.423757  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:31.921253  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:31.921281  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:31.921289  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:31.921295  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:31.923723  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:31.923746  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:31.923753  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:31.923760  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:31.923768  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:31.923776  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:31 GMT
	I0717 19:04:31.923785  228393 round_trippers.go:580]     Audit-Id: 95662038-4499-42f4-903d-8c1b065593fb
	I0717 19:04:31.923804  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:31.923967  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:31.924451  228393 node_ready.go:58] node "multinode-549411" has status "Ready":"False"
	I0717 19:04:32.421697  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:32.421720  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:32.421742  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:32.421753  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:32.424062  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:32.424086  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:32.424098  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:32 GMT
	I0717 19:04:32.424107  228393 round_trippers.go:580]     Audit-Id: 2928177d-3b31-4fe6-b6d7-1ccdaa799d8b
	I0717 19:04:32.424115  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:32.424123  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:32.424133  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:32.424146  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:32.424321  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:32.921010  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:32.921038  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:32.921048  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:32.921063  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:32.923392  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:32.923418  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:32.923429  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:32.923439  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:32.923447  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:32.923455  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:32.923464  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:32 GMT
	I0717 19:04:32.923474  228393 round_trippers.go:580]     Audit-Id: 5e993ba0-5ed3-456c-95d2-b828b8b5f0ff
	I0717 19:04:32.923629  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:33.421162  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:33.421190  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:33.421203  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:33.421210  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:33.424762  228393 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:04:33.424834  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:33.424850  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:33.424866  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:33.424874  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:33 GMT
	I0717 19:04:33.424885  228393 round_trippers.go:580]     Audit-Id: ffff0765-bab6-4aff-81d8-3b436ee90386
	I0717 19:04:33.424893  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:33.424904  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:33.425068  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:33.921735  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:33.921758  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:33.921770  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:33.921776  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:33.924097  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:33.924132  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:33.924142  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:33.924151  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:33.924158  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:33 GMT
	I0717 19:04:33.924168  228393 round_trippers.go:580]     Audit-Id: d0a7d696-5f34-4d11-b1af-0eef424ef689
	I0717 19:04:33.924181  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:33.924198  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:33.924364  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:33.924689  228393 node_ready.go:58] node "multinode-549411" has status "Ready":"False"
	I0717 19:04:34.420866  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:34.420904  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:34.420913  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:34.420919  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:34.423168  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:34.423192  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:34.423201  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:34.423209  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:34 GMT
	I0717 19:04:34.423217  228393 round_trippers.go:580]     Audit-Id: 4254960e-9501-451f-a0e7-d3e2e9f20bb4
	I0717 19:04:34.423225  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:34.423235  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:34.423247  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:34.423403  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:34.920994  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:34.921021  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:34.921029  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:34.921035  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:34.923607  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:34.923634  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:34.923645  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:34 GMT
	I0717 19:04:34.923654  228393 round_trippers.go:580]     Audit-Id: 58f777dd-2ca9-4bd5-be02-e9b48854f513
	I0717 19:04:34.923664  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:34.923673  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:34.923682  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:34.923688  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:34.923805  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:35.421383  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:35.421407  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:35.421415  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:35.421422  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:35.424057  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:35.424084  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:35.424097  228393 round_trippers.go:580]     Audit-Id: 5f954fe4-fa85-49bf-bfcf-160cf80fab26
	I0717 19:04:35.424105  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:35.424111  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:35.424119  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:35.424124  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:35.424130  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:35 GMT
	I0717 19:04:35.424293  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:35.921838  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:35.921860  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:35.921868  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:35.921874  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:35.924107  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:35.924128  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:35.924139  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:35.924148  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:35.924157  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:35 GMT
	I0717 19:04:35.924164  228393 round_trippers.go:580]     Audit-Id: 0a80c544-45cb-4c0a-a53e-6a286dd194ac
	I0717 19:04:35.924172  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:35.924184  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:35.924295  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:36.420870  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:36.420892  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:36.420900  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:36.420906  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:36.423385  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:36.423413  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:36.423425  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:36.423434  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:36 GMT
	I0717 19:04:36.423443  228393 round_trippers.go:580]     Audit-Id: f4e2b52a-22fe-4123-bdcc-0e2586bfee47
	I0717 19:04:36.423450  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:36.423466  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:36.423475  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:36.423623  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:36.423961  228393 node_ready.go:58] node "multinode-549411" has status "Ready":"False"
	I0717 19:04:36.921102  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:36.921122  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:36.921132  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:36.921138  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:36.923461  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:36.923478  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:36.923485  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:36 GMT
	I0717 19:04:36.923491  228393 round_trippers.go:580]     Audit-Id: 5724fac4-9d76-4c4f-aafd-66f16c97d6d7
	I0717 19:04:36.923496  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:36.923501  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:36.923507  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:36.923515  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:36.923645  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:37.421640  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:37.421660  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:37.421669  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:37.421675  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:37.423893  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:37.423917  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:37.423925  228393 round_trippers.go:580]     Audit-Id: d040aa68-ecef-4948-b4e4-cb7363ab1865
	I0717 19:04:37.423931  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:37.423936  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:37.423941  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:37.423947  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:37.423954  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:37 GMT
	I0717 19:04:37.424093  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:37.921741  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:37.921766  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:37.921777  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:37.921787  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:37.924166  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:37.924191  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:37.924202  228393 round_trippers.go:580]     Audit-Id: 584e296f-9e12-4230-b76f-860d0d7b7ce8
	I0717 19:04:37.924212  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:37.924220  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:37.924227  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:37.924236  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:37.924247  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:37 GMT
	I0717 19:04:37.924377  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:38.420950  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:38.420977  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:38.420986  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:38.420992  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:38.423282  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:38.423303  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:38.423314  228393 round_trippers.go:580]     Audit-Id: 595ea149-ccbb-47be-a20e-c22e8e5bcfbb
	I0717 19:04:38.423321  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:38.423328  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:38.423336  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:38.423344  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:38.423353  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:38 GMT
	I0717 19:04:38.423497  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:38.921075  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:38.921096  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:38.921105  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:38.921111  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:38.923399  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:38.923422  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:38.923432  228393 round_trippers.go:580]     Audit-Id: bb6af417-7ef8-46e8-8032-a57e15377fed
	I0717 19:04:38.923439  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:38.923447  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:38.923454  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:38.923463  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:38.923476  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:38 GMT
	I0717 19:04:38.923597  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:38.923946  228393 node_ready.go:58] node "multinode-549411" has status "Ready":"False"
	I0717 19:04:39.421199  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:39.421219  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:39.421228  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:39.421234  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:39.423562  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:39.423582  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:39.423589  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:39.423597  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:39 GMT
	I0717 19:04:39.423608  228393 round_trippers.go:580]     Audit-Id: 2cf23d1d-b9d2-46f8-82d1-5789485a3da8
	I0717 19:04:39.423618  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:39.423627  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:39.423635  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:39.423802  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:39.921653  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:39.921679  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:39.921692  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:39.921703  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:39.924050  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:39.924073  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:39.924080  228393 round_trippers.go:580]     Audit-Id: bb6884f6-fd5f-4524-a29c-67e469b90b5c
	I0717 19:04:39.924086  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:39.924091  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:39.924096  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:39.924102  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:39.924107  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:39 GMT
	I0717 19:04:39.924228  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:40.420851  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:40.420872  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:40.420881  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:40.420887  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:40.423186  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:40.423210  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:40.423218  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:40.423224  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:40.423229  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:40 GMT
	I0717 19:04:40.423235  228393 round_trippers.go:580]     Audit-Id: 2cb844f4-3213-4909-8a31-24e27d877642
	I0717 19:04:40.423240  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:40.423252  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:40.423392  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:40.921009  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:40.921031  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:40.921043  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:40.921053  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:40.923714  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:40.923738  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:40.923749  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:40.923759  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:40.923768  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:40 GMT
	I0717 19:04:40.923776  228393 round_trippers.go:580]     Audit-Id: e915b08c-f211-44ea-bbb3-9438ef1d556f
	I0717 19:04:40.923784  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:40.923796  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:40.923919  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:40.924268  228393 node_ready.go:58] node "multinode-549411" has status "Ready":"False"
	I0717 19:04:41.421213  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:41.421233  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:41.421241  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:41.421247  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:41.423441  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:41.423459  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:41.423467  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:41.423472  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:41.423480  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:41.423488  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:41 GMT
	I0717 19:04:41.423499  228393 round_trippers.go:580]     Audit-Id: 74e1df1c-0fea-4bfa-a9c6-abdaabd723d2
	I0717 19:04:41.423510  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:41.423668  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:41.921214  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:41.921239  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:41.921248  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:41.921254  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:41.923786  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:41.923807  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:41.923815  228393 round_trippers.go:580]     Audit-Id: ef29b12a-df28-4168-8523-50ea7d08fcde
	I0717 19:04:41.923821  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:41.923826  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:41.923832  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:41.923837  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:41.923842  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:41 GMT
	I0717 19:04:41.924038  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:42.421728  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:42.421756  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:42.421771  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:42.421782  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:42.424468  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:42.424487  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:42.424494  228393 round_trippers.go:580]     Audit-Id: ca698f35-d70a-43ff-8275-61d78616e418
	I0717 19:04:42.424506  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:42.424512  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:42.424520  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:42.424529  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:42.424534  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:42 GMT
	I0717 19:04:42.424706  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:42.921703  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:42.921727  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:42.921735  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:42.921745  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:42.924461  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:42.924488  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:42.924498  228393 round_trippers.go:580]     Audit-Id: 2db4abee-c66a-4b38-9984-40a9040d1f48
	I0717 19:04:42.924508  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:42.924517  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:42.924527  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:42.924536  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:42.924548  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:42 GMT
	I0717 19:04:42.924708  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:42.925066  228393 node_ready.go:58] node "multinode-549411" has status "Ready":"False"
	I0717 19:04:43.421160  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:43.421181  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:43.421189  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:43.421196  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:43.423659  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:43.423678  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:43.423686  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:43.423699  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:43.423708  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:43.423718  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:43 GMT
	I0717 19:04:43.423728  228393 round_trippers.go:580]     Audit-Id: 610ba887-001a-4acd-8f39-e3c522a7ba93
	I0717 19:04:43.423737  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:43.423907  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:43.921221  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:43.921241  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:43.921249  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:43.921257  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:43.923470  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:43.923487  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:43.923495  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:43 GMT
	I0717 19:04:43.923501  228393 round_trippers.go:580]     Audit-Id: e8719596-92eb-48cd-900f-8199c1a2f684
	I0717 19:04:43.923506  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:43.923511  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:43.923516  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:43.923523  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:43.923674  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:44.421333  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:44.421356  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:44.421364  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:44.421370  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:44.424120  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:44.424145  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:44.424157  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:44.424166  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:44.424173  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:44 GMT
	I0717 19:04:44.424184  228393 round_trippers.go:580]     Audit-Id: 657fa501-aeb2-4396-8e56-023115baa46b
	I0717 19:04:44.424196  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:44.424204  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:44.424354  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"354","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0717 19:04:44.921554  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:44.921575  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:44.921583  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:44.921589  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:44.924092  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:44.924113  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:44.924120  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:44.924126  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:44 GMT
	I0717 19:04:44.924132  228393 round_trippers.go:580]     Audit-Id: 1203c052-eb00-4126-b9b7-be4b9b192522
	I0717 19:04:44.924141  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:44.924149  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:44.924164  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:44.924305  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"424","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0717 19:04:44.924728  228393 node_ready.go:49] node "multinode-549411" has status "Ready":"True"
	I0717 19:04:44.924756  228393 node_ready.go:38] duration metric: took 31.006728908s waiting for node "multinode-549411" to be "Ready" ...
	I0717 19:04:44.924770  228393 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:04:44.924860  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0717 19:04:44.924872  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:44.924881  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:44.924888  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:44.928563  228393 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:04:44.928639  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:44.928666  228393 round_trippers.go:580]     Audit-Id: c09a0f8b-6d3a-4da4-8d2a-fe1591ec7abf
	I0717 19:04:44.928686  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:44.928708  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:44.928719  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:44.928729  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:44.928744  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:44 GMT
	I0717 19:04:44.929276  228393 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"432"},"items":[{"metadata":{"name":"coredns-5d78c9869d-98dl8","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"0b962161-8aa7-48e3-bfab-c96b8fcdeb95","resourceVersion":"429","creationTimestamp":"2023-07-17T19:04:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"ff6caf3c-f3bb-45e6-87e6-31a61699767c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:04:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ff6caf3c-f3bb-45e6-87e6-31a61699767c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55535 chars]
	I0717 19:04:44.933943  228393 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-98dl8" in "kube-system" namespace to be "Ready" ...
	I0717 19:04:44.934055  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-98dl8
	I0717 19:04:44.934066  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:44.934078  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:44.934088  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:44.936630  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:44.936655  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:44.936666  228393 round_trippers.go:580]     Audit-Id: 0f6e4ea3-f083-464a-a6e6-9e0ad3256207
	I0717 19:04:44.936674  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:44.936682  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:44.936690  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:44.936705  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:44.936714  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:44 GMT
	I0717 19:04:44.936818  228393 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-98dl8","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"0b962161-8aa7-48e3-bfab-c96b8fcdeb95","resourceVersion":"429","creationTimestamp":"2023-07-17T19:04:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"ff6caf3c-f3bb-45e6-87e6-31a61699767c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:04:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ff6caf3c-f3bb-45e6-87e6-31a61699767c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0717 19:04:44.937195  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:44.937201  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:44.937208  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:44.937214  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:44.939435  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:44.939455  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:44.939467  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:44.939475  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:44.939482  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:44.939490  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:44 GMT
	I0717 19:04:44.939497  228393 round_trippers.go:580]     Audit-Id: a4584c9d-04c0-4b4f-9761-3c2369406a0c
	I0717 19:04:44.939505  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:44.939609  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"424","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0717 19:04:45.440658  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-98dl8
	I0717 19:04:45.440679  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:45.440688  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:45.440705  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:45.443229  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:45.443253  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:45.443263  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:45.443271  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:45 GMT
	I0717 19:04:45.443278  228393 round_trippers.go:580]     Audit-Id: cd0499af-2b3e-457c-9f8f-ade0e86dd5c5
	I0717 19:04:45.443288  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:45.443300  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:45.443312  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:45.443492  228393 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-98dl8","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"0b962161-8aa7-48e3-bfab-c96b8fcdeb95","resourceVersion":"429","creationTimestamp":"2023-07-17T19:04:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"ff6caf3c-f3bb-45e6-87e6-31a61699767c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:04:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ff6caf3c-f3bb-45e6-87e6-31a61699767c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0717 19:04:45.444017  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:45.444033  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:45.444043  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:45.444052  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:45.446018  228393 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 19:04:45.446034  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:45.446040  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:45.446045  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:45 GMT
	I0717 19:04:45.446051  228393 round_trippers.go:580]     Audit-Id: e2a5ce40-8d2b-4506-89e7-97acc5a1f215
	I0717 19:04:45.446064  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:45.446074  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:45.446082  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:45.446290  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"424","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0717 19:04:45.940846  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-98dl8
	I0717 19:04:45.940878  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:45.940889  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:45.940898  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:45.943202  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:45.943228  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:45.943240  228393 round_trippers.go:580]     Audit-Id: 730911ff-ec89-4836-8f8a-6e0363827b43
	I0717 19:04:45.943250  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:45.943259  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:45.943268  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:45.943277  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:45.943284  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:45 GMT
	I0717 19:04:45.943408  228393 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-98dl8","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"0b962161-8aa7-48e3-bfab-c96b8fcdeb95","resourceVersion":"440","creationTimestamp":"2023-07-17T19:04:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"ff6caf3c-f3bb-45e6-87e6-31a61699767c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:04:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ff6caf3c-f3bb-45e6-87e6-31a61699767c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0717 19:04:45.943924  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:45.943937  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:45.943945  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:45.943951  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:45.945977  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:45.945993  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:45.946003  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:45.946012  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:45.946021  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:45.946030  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:45 GMT
	I0717 19:04:45.946040  228393 round_trippers.go:580]     Audit-Id: b6038078-9268-4e44-8bd0-2ea2cc5fecd1
	I0717 19:04:45.946046  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:45.946163  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"424","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0717 19:04:45.946498  228393 pod_ready.go:92] pod "coredns-5d78c9869d-98dl8" in "kube-system" namespace has status "Ready":"True"
	I0717 19:04:45.946514  228393 pod_ready.go:81] duration metric: took 1.012543534s waiting for pod "coredns-5d78c9869d-98dl8" in "kube-system" namespace to be "Ready" ...
	I0717 19:04:45.946523  228393 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-549411" in "kube-system" namespace to be "Ready" ...
	I0717 19:04:45.946571  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-549411
	I0717 19:04:45.946580  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:45.946587  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:45.946593  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:45.948418  228393 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 19:04:45.948434  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:45.948440  228393 round_trippers.go:580]     Audit-Id: 31404ba3-1bd9-4860-8cfc-4bd9c01d7ceb
	I0717 19:04:45.948446  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:45.948451  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:45.948456  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:45.948463  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:45.948468  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:45 GMT
	I0717 19:04:45.948589  228393 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-549411","namespace":"kube-system","uid":"b8bd6e94-7419-4088-922a-844632299e1c","resourceVersion":"304","creationTimestamp":"2023-07-17T19:03:59Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"c5621a2a52a4124e1c104e10aea0070e","kubernetes.io/config.mirror":"c5621a2a52a4124e1c104e10aea0070e","kubernetes.io/config.seen":"2023-07-17T19:03:59.528917007Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:03:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0717 19:04:45.948951  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:45.948965  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:45.948972  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:45.948979  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:45.950690  228393 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 19:04:45.950709  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:45.950718  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:45.950727  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:45.950734  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:45.950742  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:45 GMT
	I0717 19:04:45.950752  228393 round_trippers.go:580]     Audit-Id: 631d8047-d2d6-4c3d-820f-09ee8cd23601
	I0717 19:04:45.950761  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:45.950853  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"424","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0717 19:04:45.951111  228393 pod_ready.go:92] pod "etcd-multinode-549411" in "kube-system" namespace has status "Ready":"True"
	I0717 19:04:45.951123  228393 pod_ready.go:81] duration metric: took 4.594876ms waiting for pod "etcd-multinode-549411" in "kube-system" namespace to be "Ready" ...
	I0717 19:04:45.951133  228393 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-549411" in "kube-system" namespace to be "Ready" ...
	I0717 19:04:45.951174  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-549411
	I0717 19:04:45.951181  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:45.951187  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:45.951194  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:45.953091  228393 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 19:04:45.953109  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:45.953119  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:45.953127  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:45.953141  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:45.953154  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:45.953163  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:45 GMT
	I0717 19:04:45.953176  228393 round_trippers.go:580]     Audit-Id: 23191ad5-e4fd-463a-a015-86ac7e670728
	I0717 19:04:45.953302  228393 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-549411","namespace":"kube-system","uid":"b26f076b-6354-45ef-b7c2-c8ff8b7dbc15","resourceVersion":"318","creationTimestamp":"2023-07-17T19:03:59Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"a8b15805fc8c0e859b710d18c398b2d8","kubernetes.io/config.mirror":"a8b15805fc8c0e859b710d18c398b2d8","kubernetes.io/config.seen":"2023-07-17T19:03:59.528920779Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:03:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0717 19:04:45.953692  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:45.953704  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:45.953711  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:45.953717  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:45.955298  228393 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 19:04:45.955319  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:45.955330  228393 round_trippers.go:580]     Audit-Id: aea66993-8133-4ecb-b24e-99d23808cf25
	I0717 19:04:45.955337  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:45.955343  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:45.955348  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:45.955355  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:45.955364  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:45 GMT
	I0717 19:04:45.955470  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"424","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0717 19:04:45.955783  228393 pod_ready.go:92] pod "kube-apiserver-multinode-549411" in "kube-system" namespace has status "Ready":"True"
	I0717 19:04:45.955796  228393 pod_ready.go:81] duration metric: took 4.657503ms waiting for pod "kube-apiserver-multinode-549411" in "kube-system" namespace to be "Ready" ...
	I0717 19:04:45.955805  228393 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-549411" in "kube-system" namespace to be "Ready" ...
	I0717 19:04:45.955854  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-549411
	I0717 19:04:45.955862  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:45.955869  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:45.955875  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:45.957569  228393 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 19:04:45.957585  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:45.957595  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:45.957602  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:45.957611  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:45 GMT
	I0717 19:04:45.957620  228393 round_trippers.go:580]     Audit-Id: ecd1faaf-8d7d-4005-b21a-1668935f716c
	I0717 19:04:45.957630  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:45.957646  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:45.957747  228393 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-549411","namespace":"kube-system","uid":"f4c024ba-c455-4ab3-af54-817f307a1f1a","resourceVersion":"292","creationTimestamp":"2023-07-17T19:03:59Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"67fc696688ad595b16a94c8761f652ef","kubernetes.io/config.mirror":"67fc696688ad595b16a94c8761f652ef","kubernetes.io/config.seen":"2023-07-17T19:03:59.528921974Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:03:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0717 19:04:45.958101  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:45.958114  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:45.958124  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:45.958133  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:45.959736  228393 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 19:04:45.959753  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:45.959764  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:45.959781  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:45.959795  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:45 GMT
	I0717 19:04:45.959808  228393 round_trippers.go:580]     Audit-Id: 24aa1617-d361-408d-a760-3843be9c653f
	I0717 19:04:45.959822  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:45.959835  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:45.959921  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"424","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0717 19:04:45.960194  228393 pod_ready.go:92] pod "kube-controller-manager-multinode-549411" in "kube-system" namespace has status "Ready":"True"
	I0717 19:04:45.960208  228393 pod_ready.go:81] duration metric: took 4.396823ms waiting for pod "kube-controller-manager-multinode-549411" in "kube-system" namespace to be "Ready" ...
	I0717 19:04:45.960220  228393 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hzb9w" in "kube-system" namespace to be "Ready" ...
	I0717 19:04:45.960261  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hzb9w
	I0717 19:04:45.960270  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:45.960280  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:45.960290  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:45.961956  228393 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 19:04:45.961976  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:45.961986  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:45.961996  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:45 GMT
	I0717 19:04:45.962006  228393 round_trippers.go:580]     Audit-Id: 77fdd73f-11e4-4bdf-a406-70a580c747cb
	I0717 19:04:45.962015  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:45.962028  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:45.962040  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:45.962148  228393 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hzb9w","generateName":"kube-proxy-","namespace":"kube-system","uid":"612c55b1-0ad0-4c37-80d1-931cdd2767aa","resourceVersion":"401","creationTimestamp":"2023-07-17T19:04:11Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"70320397-70b5-4707-9e0c-bffe37cfd3e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:04:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"70320397-70b5-4707-9e0c-bffe37cfd3e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0717 19:04:46.121728  228393 request.go:628] Waited for 159.228342ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:46.121814  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:46.121821  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:46.121834  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:46.121846  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:46.124206  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:46.124236  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:46.124246  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:46.124254  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:46 GMT
	I0717 19:04:46.124262  228393 round_trippers.go:580]     Audit-Id: 89472652-bd51-44c0-bd05-186f90e97c9e
	I0717 19:04:46.124269  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:46.124277  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:46.124285  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:46.124384  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"424","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0717 19:04:46.124747  228393 pod_ready.go:92] pod "kube-proxy-hzb9w" in "kube-system" namespace has status "Ready":"True"
	I0717 19:04:46.124767  228393 pod_ready.go:81] duration metric: took 164.538391ms waiting for pod "kube-proxy-hzb9w" in "kube-system" namespace to be "Ready" ...
	I0717 19:04:46.124780  228393 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-549411" in "kube-system" namespace to be "Ready" ...
	I0717 19:04:46.322139  228393 request.go:628] Waited for 197.269241ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-549411
	I0717 19:04:46.322216  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-549411
	I0717 19:04:46.322224  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:46.322232  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:46.322244  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:46.325017  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:46.325046  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:46.325057  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:46.325063  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:46.325069  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:46.325074  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:46.325080  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:46 GMT
	I0717 19:04:46.325085  228393 round_trippers.go:580]     Audit-Id: 6077a0d9-0084-4faf-aecf-e6c3004bf8bc
	I0717 19:04:46.325242  228393 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-549411","namespace":"kube-system","uid":"ec3f8f05-ca8c-40fe-b852-06306bfeb4f0","resourceVersion":"325","creationTimestamp":"2023-07-17T19:03:57Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a1521f8f9a6e2a2d24ff9b0f01c1b786","kubernetes.io/config.mirror":"a1521f8f9a6e2a2d24ff9b0f01c1b786","kubernetes.io/config.seen":"2023-07-17T19:03:53.549919528Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:03:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0717 19:04:46.521990  228393 request.go:628] Waited for 196.355741ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:46.522041  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:04:46.522045  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:46.522055  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:46.522061  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:46.524258  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:46.524285  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:46.524298  228393 round_trippers.go:580]     Audit-Id: 08963ac9-8ddd-440d-8ea8-70ba42e78502
	I0717 19:04:46.524307  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:46.524317  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:46.524330  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:46.524340  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:46.524350  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:46 GMT
	I0717 19:04:46.524459  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"424","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0717 19:04:46.524822  228393 pod_ready.go:92] pod "kube-scheduler-multinode-549411" in "kube-system" namespace has status "Ready":"True"
	I0717 19:04:46.524837  228393 pod_ready.go:81] duration metric: took 400.046299ms waiting for pod "kube-scheduler-multinode-549411" in "kube-system" namespace to be "Ready" ...
	I0717 19:04:46.524847  228393 pod_ready.go:38] duration metric: took 1.60006295s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:04:46.524865  228393 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:04:46.524913  228393 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:04:46.534863  228393 command_runner.go:130] > 1422
	I0717 19:04:46.535580  228393 api_server.go:72] duration metric: took 33.372320428s to wait for apiserver process to appear ...
	I0717 19:04:46.535598  228393 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:04:46.535619  228393 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0717 19:04:46.539669  228393 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0717 19:04:46.539735  228393 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I0717 19:04:46.539743  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:46.539751  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:46.539759  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:46.540842  228393 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 19:04:46.540863  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:46.540873  228393 round_trippers.go:580]     Audit-Id: 73c29532-2b60-4261-867c-be316b7e1562
	I0717 19:04:46.540879  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:46.540885  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:46.540896  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:46.540901  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:46.540909  228393 round_trippers.go:580]     Content-Length: 263
	I0717 19:04:46.540917  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:46 GMT
	I0717 19:04:46.540933  228393 request.go:1188] Response Body: {
	  "major": "1",
	  "minor": "27",
	  "gitVersion": "v1.27.3",
	  "gitCommit": "25b4e43193bcda6c7328a6d147b1fb73a33f1598",
	  "gitTreeState": "clean",
	  "buildDate": "2023-06-14T09:47:40Z",
	  "goVersion": "go1.20.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0717 19:04:46.541011  228393 api_server.go:141] control plane version: v1.27.3
	I0717 19:04:46.541024  228393 api_server.go:131] duration metric: took 5.420667ms to wait for apiserver health ...
	I0717 19:04:46.541031  228393 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:04:46.722462  228393 request.go:628] Waited for 181.343505ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0717 19:04:46.722524  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0717 19:04:46.722529  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:46.722538  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:46.722544  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:46.726633  228393 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0717 19:04:46.726662  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:46.726684  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:46.726694  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:46.726703  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:46 GMT
	I0717 19:04:46.726712  228393 round_trippers.go:580]     Audit-Id: 832f727b-d4b6-4f43-a115-b3427023053b
	I0717 19:04:46.726721  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:46.726731  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:46.727265  228393 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"444"},"items":[{"metadata":{"name":"coredns-5d78c9869d-98dl8","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"0b962161-8aa7-48e3-bfab-c96b8fcdeb95","resourceVersion":"440","creationTimestamp":"2023-07-17T19:04:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"ff6caf3c-f3bb-45e6-87e6-31a61699767c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:04:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ff6caf3c-f3bb-45e6-87e6-31a61699767c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55613 chars]
	I0717 19:04:46.729029  228393 system_pods.go:59] 8 kube-system pods found
	I0717 19:04:46.729082  228393 system_pods.go:61] "coredns-5d78c9869d-98dl8" [0b962161-8aa7-48e3-bfab-c96b8fcdeb95] Running
	I0717 19:04:46.729091  228393 system_pods.go:61] "etcd-multinode-549411" [b8bd6e94-7419-4088-922a-844632299e1c] Running
	I0717 19:04:46.729095  228393 system_pods.go:61] "kindnet-zjw42" [145336b3-84d1-459d-985a-f030ea0d3789] Running
	I0717 19:04:46.729104  228393 system_pods.go:61] "kube-apiserver-multinode-549411" [b26f076b-6354-45ef-b7c2-c8ff8b7dbc15] Running
	I0717 19:04:46.729109  228393 system_pods.go:61] "kube-controller-manager-multinode-549411" [f4c024ba-c455-4ab3-af54-817f307a1f1a] Running
	I0717 19:04:46.729114  228393 system_pods.go:61] "kube-proxy-hzb9w" [612c55b1-0ad0-4c37-80d1-931cdd2767aa] Running
	I0717 19:04:46.729119  228393 system_pods.go:61] "kube-scheduler-multinode-549411" [ec3f8f05-ca8c-40fe-b852-06306bfeb4f0] Running
	I0717 19:04:46.729123  228393 system_pods.go:61] "storage-provisioner" [405bbed8-a7bb-484e-b391-fc1e85d55700] Running
	I0717 19:04:46.729131  228393 system_pods.go:74] duration metric: took 188.09302ms to wait for pod list to return data ...
	I0717 19:04:46.729139  228393 default_sa.go:34] waiting for default service account to be created ...
	I0717 19:04:46.922559  228393 request.go:628] Waited for 193.335599ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0717 19:04:46.922623  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0717 19:04:46.922627  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:46.922641  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:46.922648  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:46.924991  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:46.925010  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:46.925017  228393 round_trippers.go:580]     Content-Length: 261
	I0717 19:04:46.925023  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:46 GMT
	I0717 19:04:46.925028  228393 round_trippers.go:580]     Audit-Id: b6cf4a1a-b9d2-4b73-8a3b-9986e07e6a8d
	I0717 19:04:46.925033  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:46.925039  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:46.925044  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:46.925049  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:46.925073  228393 request.go:1188] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"444"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"6ff53ea5-ea3f-4d33-858c-c4be98bfb619","resourceVersion":"355","creationTimestamp":"2023-07-17T19:04:12Z"}}]}
	I0717 19:04:46.925293  228393 default_sa.go:45] found service account: "default"
	I0717 19:04:46.925311  228393 default_sa.go:55] duration metric: took 196.159398ms for default service account to be created ...
	I0717 19:04:46.925318  228393 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 19:04:47.121654  228393 request.go:628] Waited for 196.269002ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0717 19:04:47.121713  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0717 19:04:47.121717  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:47.121725  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:47.121732  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:47.125107  228393 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:04:47.125135  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:47.125142  228393 round_trippers.go:580]     Audit-Id: e0221b5e-ae48-4f89-999d-f35fdff61ba6
	I0717 19:04:47.125148  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:47.125154  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:47.125159  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:47.125165  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:47.125171  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:47 GMT
	I0717 19:04:47.125557  228393 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"445"},"items":[{"metadata":{"name":"coredns-5d78c9869d-98dl8","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"0b962161-8aa7-48e3-bfab-c96b8fcdeb95","resourceVersion":"440","creationTimestamp":"2023-07-17T19:04:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"ff6caf3c-f3bb-45e6-87e6-31a61699767c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:04:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ff6caf3c-f3bb-45e6-87e6-31a61699767c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55613 chars]
	I0717 19:04:47.127442  228393 system_pods.go:86] 8 kube-system pods found
	I0717 19:04:47.127463  228393 system_pods.go:89] "coredns-5d78c9869d-98dl8" [0b962161-8aa7-48e3-bfab-c96b8fcdeb95] Running
	I0717 19:04:47.127468  228393 system_pods.go:89] "etcd-multinode-549411" [b8bd6e94-7419-4088-922a-844632299e1c] Running
	I0717 19:04:47.127474  228393 system_pods.go:89] "kindnet-zjw42" [145336b3-84d1-459d-985a-f030ea0d3789] Running
	I0717 19:04:47.127478  228393 system_pods.go:89] "kube-apiserver-multinode-549411" [b26f076b-6354-45ef-b7c2-c8ff8b7dbc15] Running
	I0717 19:04:47.127483  228393 system_pods.go:89] "kube-controller-manager-multinode-549411" [f4c024ba-c455-4ab3-af54-817f307a1f1a] Running
	I0717 19:04:47.127487  228393 system_pods.go:89] "kube-proxy-hzb9w" [612c55b1-0ad0-4c37-80d1-931cdd2767aa] Running
	I0717 19:04:47.127490  228393 system_pods.go:89] "kube-scheduler-multinode-549411" [ec3f8f05-ca8c-40fe-b852-06306bfeb4f0] Running
	I0717 19:04:47.127494  228393 system_pods.go:89] "storage-provisioner" [405bbed8-a7bb-484e-b391-fc1e85d55700] Running
	I0717 19:04:47.127501  228393 system_pods.go:126] duration metric: took 202.178615ms to wait for k8s-apps to be running ...
	I0717 19:04:47.127509  228393 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 19:04:47.127555  228393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:04:47.138514  228393 system_svc.go:56] duration metric: took 10.997012ms WaitForService to wait for kubelet.
	I0717 19:04:47.138536  228393 kubeadm.go:581] duration metric: took 33.975282936s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 19:04:47.138560  228393 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:04:47.321991  228393 request.go:628] Waited for 183.33601ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0717 19:04:47.322065  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0717 19:04:47.322073  228393 round_trippers.go:469] Request Headers:
	I0717 19:04:47.322087  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:04:47.322101  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:04:47.324493  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:04:47.324515  228393 round_trippers.go:577] Response Headers:
	I0717 19:04:47.324523  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:04:47.324532  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:04:47.324538  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:04:47.324547  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:04:47 GMT
	I0717 19:04:47.324553  228393 round_trippers.go:580]     Audit-Id: e21b770d-357e-4714-914e-0201967ab77d
	I0717 19:04:47.324562  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:04:47.324659  228393 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"446"},"items":[{"metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"424","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6000 chars]
	I0717 19:04:47.325019  228393 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0717 19:04:47.325034  228393 node_conditions.go:123] node cpu capacity is 8
	I0717 19:04:47.325045  228393 node_conditions.go:105] duration metric: took 186.480812ms to run NodePressure ...
	I0717 19:04:47.325059  228393 start.go:228] waiting for startup goroutines ...
	I0717 19:04:47.325069  228393 start.go:233] waiting for cluster config update ...
	I0717 19:04:47.325079  228393 start.go:242] writing updated cluster config ...
	I0717 19:04:47.327613  228393 out.go:177] 
	I0717 19:04:47.329318  228393 config.go:182] Loaded profile config "multinode-549411": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:04:47.329402  228393 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/multinode-549411/config.json ...
	I0717 19:04:47.331403  228393 out.go:177] * Starting worker node multinode-549411-m02 in cluster multinode-549411
	I0717 19:04:47.333005  228393 cache.go:122] Beginning downloading kic base image for docker with crio
	I0717 19:04:47.334456  228393 out.go:177] * Pulling base image ...
	I0717 19:04:47.335889  228393 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 19:04:47.335915  228393 cache.go:57] Caching tarball of preloaded images
	I0717 19:04:47.335996  228393 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 19:04:47.336014  228393 preload.go:174] Found /home/jenkins/minikube-integration/16890-138069/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 19:04:47.336022  228393 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0717 19:04:47.336101  228393 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/multinode-549411/config.json ...
	I0717 19:04:47.352645  228393 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0717 19:04:47.352674  228393 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0717 19:04:47.352695  228393 cache.go:195] Successfully downloaded all kic artifacts
	I0717 19:04:47.352728  228393 start.go:365] acquiring machines lock for multinode-549411-m02: {Name:mk0b8af3812bef9dadd304fd581b3441d35edb94 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:04:47.352851  228393 start.go:369] acquired machines lock for "multinode-549411-m02" in 98.544µs
	I0717 19:04:47.352882  228393 start.go:93] Provisioning new machine with config: &{Name:multinode-549411 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-549411 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name:m02 IP: Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0717 19:04:47.352974  228393 start.go:125] createHost starting for "m02" (driver="docker")
	I0717 19:04:47.356775  228393 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0717 19:04:47.356897  228393 start.go:159] libmachine.API.Create for "multinode-549411" (driver="docker")
	I0717 19:04:47.356923  228393 client.go:168] LocalClient.Create starting
	I0717 19:04:47.356991  228393 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem
	I0717 19:04:47.357022  228393 main.go:141] libmachine: Decoding PEM data...
	I0717 19:04:47.357036  228393 main.go:141] libmachine: Parsing certificate...
	I0717 19:04:47.357088  228393 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16890-138069/.minikube/certs/cert.pem
	I0717 19:04:47.357108  228393 main.go:141] libmachine: Decoding PEM data...
	I0717 19:04:47.357119  228393 main.go:141] libmachine: Parsing certificate...
	I0717 19:04:47.357298  228393 cli_runner.go:164] Run: docker network inspect multinode-549411 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 19:04:47.373297  228393 network_create.go:76] Found existing network {name:multinode-549411 subnet:0xc0012f7890 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0717 19:04:47.373344  228393 kic.go:117] calculated static IP "192.168.58.3" for the "multinode-549411-m02" container
	I0717 19:04:47.373411  228393 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0717 19:04:47.388579  228393 cli_runner.go:164] Run: docker volume create multinode-549411-m02 --label name.minikube.sigs.k8s.io=multinode-549411-m02 --label created_by.minikube.sigs.k8s.io=true
	I0717 19:04:47.405207  228393 oci.go:103] Successfully created a docker volume multinode-549411-m02
	I0717 19:04:47.405278  228393 cli_runner.go:164] Run: docker run --rm --name multinode-549411-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-549411-m02 --entrypoint /usr/bin/test -v multinode-549411-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0717 19:04:47.934887  228393 oci.go:107] Successfully prepared a docker volume multinode-549411-m02
	I0717 19:04:47.934940  228393 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 19:04:47.934967  228393 kic.go:190] Starting extracting preloaded images to volume ...
	I0717 19:04:47.935046  228393 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16890-138069/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-549411-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0717 19:04:52.818741  228393 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/16890-138069/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-549411-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (4.883647633s)
	I0717 19:04:52.818778  228393 kic.go:199] duration metric: took 4.883806 seconds to extract preloaded images to volume
	W0717 19:04:52.818917  228393 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0717 19:04:52.819010  228393 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0717 19:04:52.869586  228393 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-549411-m02 --name multinode-549411-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-549411-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-549411-m02 --network multinode-549411 --ip 192.168.58.3 --volume multinode-549411-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0717 19:04:53.179084  228393 cli_runner.go:164] Run: docker container inspect multinode-549411-m02 --format={{.State.Running}}
	I0717 19:04:53.196802  228393 cli_runner.go:164] Run: docker container inspect multinode-549411-m02 --format={{.State.Status}}
	I0717 19:04:53.214256  228393 cli_runner.go:164] Run: docker exec multinode-549411-m02 stat /var/lib/dpkg/alternatives/iptables
	I0717 19:04:53.276465  228393 oci.go:144] the created container "multinode-549411-m02" has a running status.
	I0717 19:04:53.276515  228393 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16890-138069/.minikube/machines/multinode-549411-m02/id_rsa...
	I0717 19:04:53.356682  228393 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/machines/multinode-549411-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0717 19:04:53.356727  228393 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16890-138069/.minikube/machines/multinode-549411-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0717 19:04:53.376720  228393 cli_runner.go:164] Run: docker container inspect multinode-549411-m02 --format={{.State.Status}}
	I0717 19:04:53.393176  228393 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0717 19:04:53.393202  228393 kic_runner.go:114] Args: [docker exec --privileged multinode-549411-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0717 19:04:53.450477  228393 cli_runner.go:164] Run: docker container inspect multinode-549411-m02 --format={{.State.Status}}
	I0717 19:04:53.468261  228393 machine.go:88] provisioning docker machine ...
	I0717 19:04:53.468314  228393 ubuntu.go:169] provisioning hostname "multinode-549411-m02"
	I0717 19:04:53.468391  228393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-549411-m02
	I0717 19:04:53.489478  228393 main.go:141] libmachine: Using SSH client type: native
	I0717 19:04:53.490098  228393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0717 19:04:53.490126  228393 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-549411-m02 && echo "multinode-549411-m02" | sudo tee /etc/hostname
	I0717 19:04:53.490820  228393 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53450->127.0.0.1:32852: read: connection reset by peer
	I0717 19:04:56.627078  228393 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-549411-m02
	
	I0717 19:04:56.627158  228393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-549411-m02
	I0717 19:04:56.644097  228393 main.go:141] libmachine: Using SSH client type: native
	I0717 19:04:56.644518  228393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0717 19:04:56.644538  228393 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-549411-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-549411-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-549411-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:04:56.772314  228393 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:04:56.772343  228393 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16890-138069/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-138069/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-138069/.minikube}
	I0717 19:04:56.772363  228393 ubuntu.go:177] setting up certificates
	I0717 19:04:56.772373  228393 provision.go:83] configureAuth start
	I0717 19:04:56.772438  228393 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-549411-m02
	I0717 19:04:56.788808  228393 provision.go:138] copyHostCerts
	I0717 19:04:56.788850  228393 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/16890-138069/.minikube/ca.pem
	I0717 19:04:56.788878  228393 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-138069/.minikube/ca.pem, removing ...
	I0717 19:04:56.788887  228393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-138069/.minikube/ca.pem
	I0717 19:04:56.788948  228393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-138069/.minikube/ca.pem (1078 bytes)
	I0717 19:04:56.789024  228393 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/16890-138069/.minikube/cert.pem
	I0717 19:04:56.789044  228393 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-138069/.minikube/cert.pem, removing ...
	I0717 19:04:56.789047  228393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-138069/.minikube/cert.pem
	I0717 19:04:56.789069  228393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-138069/.minikube/cert.pem (1123 bytes)
	I0717 19:04:56.789110  228393 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/16890-138069/.minikube/key.pem
	I0717 19:04:56.789136  228393 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-138069/.minikube/key.pem, removing ...
	I0717 19:04:56.789140  228393 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-138069/.minikube/key.pem
	I0717 19:04:56.789158  228393 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-138069/.minikube/key.pem (1675 bytes)
	I0717 19:04:56.789250  228393 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-138069/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca-key.pem org=jenkins.multinode-549411-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-549411-m02]
	I0717 19:04:56.852569  228393 provision.go:172] copyRemoteCerts
	I0717 19:04:56.852648  228393 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:04:56.852684  228393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-549411-m02
	I0717 19:04:56.868907  228393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/multinode-549411-m02/id_rsa Username:docker}
	I0717 19:04:56.964694  228393 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 19:04:56.964761  228393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0717 19:04:56.986097  228393 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 19:04:56.986176  228393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 19:04:57.006661  228393 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 19:04:57.006721  228393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 19:04:57.027553  228393 provision.go:86] duration metric: configureAuth took 255.167152ms
	I0717 19:04:57.027578  228393 ubuntu.go:193] setting minikube options for container-runtime
	I0717 19:04:57.027754  228393 config.go:182] Loaded profile config "multinode-549411": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:04:57.027858  228393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-549411-m02
	I0717 19:04:57.043419  228393 main.go:141] libmachine: Using SSH client type: native
	I0717 19:04:57.043824  228393 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0717 19:04:57.043842  228393 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:04:57.254600  228393 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:04:57.254625  228393 machine.go:91] provisioned docker machine in 3.786334689s
	I0717 19:04:57.254634  228393 client.go:171] LocalClient.Create took 9.897706627s
	I0717 19:04:57.254654  228393 start.go:167] duration metric: libmachine.API.Create for "multinode-549411" took 9.897760189s
	I0717 19:04:57.254663  228393 start.go:300] post-start starting for "multinode-549411-m02" (driver="docker")
	I0717 19:04:57.254675  228393 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:04:57.254742  228393 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:04:57.254806  228393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-549411-m02
	I0717 19:04:57.271337  228393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/multinode-549411-m02/id_rsa Username:docker}
	I0717 19:04:57.365131  228393 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:04:57.368221  228393 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.2 LTS"
	I0717 19:04:57.368245  228393 command_runner.go:130] > NAME="Ubuntu"
	I0717 19:04:57.368251  228393 command_runner.go:130] > VERSION_ID="22.04"
	I0717 19:04:57.368256  228393 command_runner.go:130] > VERSION="22.04.2 LTS (Jammy Jellyfish)"
	I0717 19:04:57.368263  228393 command_runner.go:130] > VERSION_CODENAME=jammy
	I0717 19:04:57.368269  228393 command_runner.go:130] > ID=ubuntu
	I0717 19:04:57.368276  228393 command_runner.go:130] > ID_LIKE=debian
	I0717 19:04:57.368289  228393 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0717 19:04:57.368301  228393 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0717 19:04:57.368314  228393 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0717 19:04:57.368343  228393 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0717 19:04:57.368352  228393 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0717 19:04:57.368411  228393 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 19:04:57.368439  228393 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 19:04:57.368456  228393 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 19:04:57.368468  228393 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0717 19:04:57.368484  228393 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-138069/.minikube/addons for local assets ...
	I0717 19:04:57.368555  228393 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-138069/.minikube/files for local assets ...
	I0717 19:04:57.368644  228393 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/1448222.pem -> 1448222.pem in /etc/ssl/certs
	I0717 19:04:57.368662  228393 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/1448222.pem -> /etc/ssl/certs/1448222.pem
	I0717 19:04:57.368779  228393 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:04:57.376925  228393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/1448222.pem --> /etc/ssl/certs/1448222.pem (1708 bytes)
	I0717 19:04:57.398976  228393 start.go:303] post-start completed in 144.297113ms
	I0717 19:04:57.399316  228393 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-549411-m02
	I0717 19:04:57.415316  228393 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/multinode-549411/config.json ...
	I0717 19:04:57.415612  228393 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 19:04:57.415662  228393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-549411-m02
	I0717 19:04:57.431743  228393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/multinode-549411-m02/id_rsa Username:docker}
	I0717 19:04:57.521061  228393 command_runner.go:130] > 26%!
	(MISSING)I0717 19:04:57.521431  228393 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 19:04:57.525880  228393 command_runner.go:130] > 218G
	I0717 19:04:57.525924  228393 start.go:128] duration metric: createHost completed in 10.17294078s
	I0717 19:04:57.525936  228393 start.go:83] releasing machines lock for "multinode-549411-m02", held for 10.173070484s
	I0717 19:04:57.526017  228393 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-549411-m02
	I0717 19:04:57.544172  228393 out.go:177] * Found network options:
	I0717 19:04:57.545932  228393 out.go:177]   - NO_PROXY=192.168.58.2
	W0717 19:04:57.547533  228393 proxy.go:119] fail to check proxy env: Error ip not in block
	W0717 19:04:57.547574  228393 proxy.go:119] fail to check proxy env: Error ip not in block
	I0717 19:04:57.547652  228393 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:04:57.547714  228393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-549411-m02
	I0717 19:04:57.547718  228393 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:04:57.547773  228393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-549411-m02
	I0717 19:04:57.564598  228393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/multinode-549411-m02/id_rsa Username:docker}
	I0717 19:04:57.564776  228393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/multinode-549411-m02/id_rsa Username:docker}
	I0717 19:04:57.789831  228393 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 19:04:57.789844  228393 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0717 19:04:57.794353  228393 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0717 19:04:57.794381  228393 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0717 19:04:57.794391  228393 command_runner.go:130] > Device: b0h/176d	Inode: 559415      Links: 1
	I0717 19:04:57.794401  228393 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 19:04:57.794410  228393 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0717 19:04:57.794451  228393 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0717 19:04:57.794469  228393 command_runner.go:130] > Change: 2023-07-17 18:45:35.686698013 +0000
	I0717 19:04:57.794477  228393 command_runner.go:130] >  Birth: 2023-07-17 18:45:35.686698013 +0000
	I0717 19:04:57.794589  228393 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:04:57.813511  228393 cni.go:227] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0717 19:04:57.813592  228393 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:04:57.840850  228393 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0717 19:04:57.840917  228393 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0717 19:04:57.840928  228393 start.go:469] detecting cgroup driver to use...
	I0717 19:04:57.840961  228393 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 19:04:57.841002  228393 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:04:57.854803  228393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:04:57.865204  228393 docker.go:196] disabling cri-docker service (if available) ...
	I0717 19:04:57.865270  228393 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:04:57.877736  228393 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:04:57.889982  228393 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:04:57.972268  228393 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:04:58.047614  228393 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0717 19:04:58.047654  228393 docker.go:212] disabling docker service ...
	I0717 19:04:58.047708  228393 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:04:58.065091  228393 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:04:58.075723  228393 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:04:58.147368  228393 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0717 19:04:58.147450  228393 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:04:58.224636  228393 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0717 19:04:58.224715  228393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
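Disabling the Docker-based runtimes above is done purely through systemd so that CRI-O remains the only active CRI on the node. A condensed sketch of the equivalent commands, based on the invocations logged here, would be:

	# stop, disable and mask cri-dockerd and docker so they cannot come back on reboot
	sudo systemctl stop -f cri-docker.socket cri-docker.service docker.socket docker.service
	sudo systemctl disable cri-docker.socket docker.socket
	sudo systemctl mask cri-docker.service docker.service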
	I0717 19:04:58.235052  228393 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:04:58.249806  228393 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
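The tee invocation above writes a one-line crictl configuration pointing the CLI at the CRI-O socket; judging from the output echoed back, /etc/crictl.yaml on the node ends up as:

	runtime-endpoint: unix:///var/run/crio/crio.sock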
	I0717 19:04:58.249860  228393 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 19:04:58.249921  228393 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:04:58.259636  228393 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:04:58.259717  228393 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:04:58.268756  228393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:04:58.277538  228393 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
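Together, the sed edits above align CRI-O's drop-in configuration with what kubeadm expects for this cluster: the pause image, the cgroup driver and the conmon cgroup. As confirmed by the crio config dump further down, the effective settings after these edits are:

	# in the [crio.runtime] table
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	# in the [crio.image] table
	pause_image = "registry.k8s.io/pause:3.9"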
	I0717 19:04:58.286475  228393 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:04:58.294943  228393 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:04:58.302753  228393 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0717 19:04:58.302838  228393 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:04:58.310505  228393 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:04:58.385610  228393 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:04:58.492353  228393 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:04:58.492426  228393 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:04:58.495750  228393 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0717 19:04:58.495775  228393 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0717 19:04:58.495784  228393 command_runner.go:130] > Device: b9h/185d	Inode: 186         Links: 1
	I0717 19:04:58.495794  228393 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 19:04:58.495801  228393 command_runner.go:130] > Access: 2023-07-17 19:04:58.478835885 +0000
	I0717 19:04:58.495818  228393 command_runner.go:130] > Modify: 2023-07-17 19:04:58.478835885 +0000
	I0717 19:04:58.495827  228393 command_runner.go:130] > Change: 2023-07-17 19:04:58.478835885 +0000
	I0717 19:04:58.495834  228393 command_runner.go:130] >  Birth: -
	I0717 19:04:58.495877  228393 start.go:537] Will wait 60s for crictl version
	I0717 19:04:58.495924  228393 ssh_runner.go:195] Run: which crictl
	I0717 19:04:58.498973  228393 command_runner.go:130] > /usr/bin/crictl
	I0717 19:04:58.499045  228393 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:04:58.531007  228393 command_runner.go:130] > Version:  0.1.0
	I0717 19:04:58.531030  228393 command_runner.go:130] > RuntimeName:  cri-o
	I0717 19:04:58.531036  228393 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0717 19:04:58.531043  228393 command_runner.go:130] > RuntimeApiVersion:  v1
	I0717 19:04:58.533072  228393 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
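The runtime probe above goes through crictl rather than talking to CRI-O directly; it can be reproduced on the node with the binary the log resolves via which, and in this run it reports:

	sudo /usr/bin/crictl version
	# RuntimeName:        cri-o
	# RuntimeVersion:     1.24.6
	# RuntimeApiVersion:  v1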
	I0717 19:04:58.533147  228393 ssh_runner.go:195] Run: crio --version
	I0717 19:04:58.566099  228393 command_runner.go:130] > crio version 1.24.6
	I0717 19:04:58.566119  228393 command_runner.go:130] > Version:          1.24.6
	I0717 19:04:58.566129  228393 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0717 19:04:58.566133  228393 command_runner.go:130] > GitTreeState:     clean
	I0717 19:04:58.566141  228393 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0717 19:04:58.566146  228393 command_runner.go:130] > GoVersion:        go1.18.2
	I0717 19:04:58.566155  228393 command_runner.go:130] > Compiler:         gc
	I0717 19:04:58.566161  228393 command_runner.go:130] > Platform:         linux/amd64
	I0717 19:04:58.566169  228393 command_runner.go:130] > Linkmode:         dynamic
	I0717 19:04:58.566179  228393 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0717 19:04:58.566186  228393 command_runner.go:130] > SeccompEnabled:   true
	I0717 19:04:58.566192  228393 command_runner.go:130] > AppArmorEnabled:  false
	I0717 19:04:58.566264  228393 ssh_runner.go:195] Run: crio --version
	I0717 19:04:58.597350  228393 command_runner.go:130] > crio version 1.24.6
	I0717 19:04:58.597370  228393 command_runner.go:130] > Version:          1.24.6
	I0717 19:04:58.597377  228393 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0717 19:04:58.597381  228393 command_runner.go:130] > GitTreeState:     clean
	I0717 19:04:58.597387  228393 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0717 19:04:58.597392  228393 command_runner.go:130] > GoVersion:        go1.18.2
	I0717 19:04:58.597396  228393 command_runner.go:130] > Compiler:         gc
	I0717 19:04:58.597400  228393 command_runner.go:130] > Platform:         linux/amd64
	I0717 19:04:58.597405  228393 command_runner.go:130] > Linkmode:         dynamic
	I0717 19:04:58.597412  228393 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0717 19:04:58.597416  228393 command_runner.go:130] > SeccompEnabled:   true
	I0717 19:04:58.597420  228393 command_runner.go:130] > AppArmorEnabled:  false
	I0717 19:04:58.602175  228393 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.6 ...
	I0717 19:04:58.603732  228393 out.go:177]   - env NO_PROXY=192.168.58.2
	I0717 19:04:58.605117  228393 cli_runner.go:164] Run: docker network inspect multinode-549411 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 19:04:58.621805  228393 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0717 19:04:58.625538  228393 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
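The hosts rewrite above strips any stale host.minikube.internal entry and appends a fresh one pointing at the network gateway, so workloads on the node can reach the host. With the 192.168.58.0/24 network used here, the node's /etc/hosts gains the line:

	192.168.58.1	host.minikube.internal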
	I0717 19:04:58.635898  228393 certs.go:56] Setting up /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/multinode-549411 for IP: 192.168.58.3
	I0717 19:04:58.635928  228393 certs.go:190] acquiring lock for shared ca certs: {Name:mk42196ce59710ebf500640671660e2f4656c84e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:04:58.636081  228393 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16890-138069/.minikube/ca.key
	I0717 19:04:58.636120  228393 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16890-138069/.minikube/proxy-client-ca.key
	I0717 19:04:58.636134  228393 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 19:04:58.636149  228393 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 19:04:58.636161  228393 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 19:04:58.636175  228393 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 19:04:58.636230  228393 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/home/jenkins/minikube-integration/16890-138069/.minikube/certs/144822.pem (1338 bytes)
	W0717 19:04:58.636259  228393 certs.go:433] ignoring /home/jenkins/minikube-integration/16890-138069/.minikube/certs/home/jenkins/minikube-integration/16890-138069/.minikube/certs/144822_empty.pem, impossibly tiny 0 bytes
	I0717 19:04:58.636270  228393 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:04:58.636297  228393 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem (1078 bytes)
	I0717 19:04:58.636321  228393 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/home/jenkins/minikube-integration/16890-138069/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:04:58.636344  228393 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/home/jenkins/minikube-integration/16890-138069/.minikube/certs/key.pem (1675 bytes)
	I0717 19:04:58.636383  228393 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/1448222.pem (1708 bytes)
	I0717 19:04:58.636409  228393 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:04:58.636421  228393 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/144822.pem -> /usr/share/ca-certificates/144822.pem
	I0717 19:04:58.636434  228393 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/1448222.pem -> /usr/share/ca-certificates/1448222.pem
	I0717 19:04:58.636759  228393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:04:58.658974  228393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 19:04:58.679898  228393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:04:58.700969  228393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 19:04:58.722013  228393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:04:58.743313  228393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/certs/144822.pem --> /usr/share/ca-certificates/144822.pem (1338 bytes)
	I0717 19:04:58.764823  228393 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/1448222.pem --> /usr/share/ca-certificates/1448222.pem (1708 bytes)
	I0717 19:04:58.785588  228393 ssh_runner.go:195] Run: openssl version
	I0717 19:04:58.790710  228393 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0717 19:04:58.790794  228393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:04:58.799263  228393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:04:58.802593  228393 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 17 18:46 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:04:58.802626  228393 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:46 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:04:58.802678  228393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:04:58.809751  228393 command_runner.go:130] > b5213941
	I0717 19:04:58.809930  228393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:04:58.818440  228393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144822.pem && ln -fs /usr/share/ca-certificates/144822.pem /etc/ssl/certs/144822.pem"
	I0717 19:04:58.826872  228393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144822.pem
	I0717 19:04:58.830521  228393 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 17 18:51 /usr/share/ca-certificates/144822.pem
	I0717 19:04:58.830569  228393 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:51 /usr/share/ca-certificates/144822.pem
	I0717 19:04:58.830600  228393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144822.pem
	I0717 19:04:58.836617  228393 command_runner.go:130] > 51391683
	I0717 19:04:58.836877  228393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/144822.pem /etc/ssl/certs/51391683.0"
	I0717 19:04:58.845528  228393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1448222.pem && ln -fs /usr/share/ca-certificates/1448222.pem /etc/ssl/certs/1448222.pem"
	I0717 19:04:58.853872  228393 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1448222.pem
	I0717 19:04:58.856977  228393 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 17 18:51 /usr/share/ca-certificates/1448222.pem
	I0717 19:04:58.857042  228393 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:51 /usr/share/ca-certificates/1448222.pem
	I0717 19:04:58.857077  228393 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1448222.pem
	I0717 19:04:58.863475  228393 command_runner.go:130] > 3ec20f2e
	I0717 19:04:58.863527  228393 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1448222.pem /etc/ssl/certs/3ec20f2e.0"
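Each of the three certificate blocks above follows the same pattern: link the PEM from /usr/share/ca-certificates into /etc/ssl/certs, hash it with openssl, then create the hash-named symlink that OpenSSL uses for lookups. A sketch of that loop for the minikubeCA certificate, using the b5213941 hash reported in this run, would be:

	CERT=/usr/share/ca-certificates/minikubeCA.pem
	sudo ln -fs "$CERT" /etc/ssl/certs/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")            # b5213941 here
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"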
	I0717 19:04:58.871921  228393 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 19:04:58.874956  228393 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 19:04:58.874996  228393 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 19:04:58.875072  228393 ssh_runner.go:195] Run: crio config
	I0717 19:04:58.912610  228393 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0717 19:04:58.912646  228393 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0717 19:04:58.912654  228393 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0717 19:04:58.912657  228393 command_runner.go:130] > #
	I0717 19:04:58.912665  228393 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0717 19:04:58.912671  228393 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0717 19:04:58.912676  228393 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0717 19:04:58.912683  228393 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0717 19:04:58.912688  228393 command_runner.go:130] > # reload'.
	I0717 19:04:58.912697  228393 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0717 19:04:58.912707  228393 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0717 19:04:58.912721  228393 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0717 19:04:58.912736  228393 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0717 19:04:58.912742  228393 command_runner.go:130] > [crio]
	I0717 19:04:58.912751  228393 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0717 19:04:58.912766  228393 command_runner.go:130] > # containers images, in this directory.
	I0717 19:04:58.912777  228393 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0717 19:04:58.912787  228393 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0717 19:04:58.912847  228393 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0717 19:04:58.912868  228393 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0717 19:04:58.912886  228393 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0717 19:04:58.912901  228393 command_runner.go:130] > # storage_driver = "vfs"
	I0717 19:04:58.912937  228393 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0717 19:04:58.912968  228393 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0717 19:04:58.912986  228393 command_runner.go:130] > # storage_option = [
	I0717 19:04:58.913000  228393 command_runner.go:130] > # ]
	I0717 19:04:58.913018  228393 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0717 19:04:58.913049  228393 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0717 19:04:58.913076  228393 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0717 19:04:58.913108  228393 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0717 19:04:58.913129  228393 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0717 19:04:58.913159  228393 command_runner.go:130] > # always happen on a node reboot
	I0717 19:04:58.913190  228393 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0717 19:04:58.913212  228393 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0717 19:04:58.913232  228393 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0717 19:04:58.913272  228393 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0717 19:04:58.913301  228393 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0717 19:04:58.913325  228393 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0717 19:04:58.913350  228393 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0717 19:04:58.913386  228393 command_runner.go:130] > # internal_wipe = true
	I0717 19:04:58.913401  228393 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0717 19:04:58.913416  228393 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0717 19:04:58.913425  228393 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0717 19:04:58.913435  228393 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0717 19:04:58.913450  228393 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0717 19:04:58.913456  228393 command_runner.go:130] > [crio.api]
	I0717 19:04:58.913465  228393 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0717 19:04:58.913475  228393 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0717 19:04:58.913488  228393 command_runner.go:130] > # IP address on which the stream server will listen.
	I0717 19:04:58.913498  228393 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0717 19:04:58.913509  228393 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0717 19:04:58.913516  228393 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0717 19:04:58.913554  228393 command_runner.go:130] > # stream_port = "0"
	I0717 19:04:58.913565  228393 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0717 19:04:58.913573  228393 command_runner.go:130] > # stream_enable_tls = false
	I0717 19:04:58.913589  228393 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0717 19:04:58.913598  228393 command_runner.go:130] > # stream_idle_timeout = ""
	I0717 19:04:58.913608  228393 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0717 19:04:58.913629  228393 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0717 19:04:58.913639  228393 command_runner.go:130] > # minutes.
	I0717 19:04:58.913645  228393 command_runner.go:130] > # stream_tls_cert = ""
	I0717 19:04:58.913655  228393 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0717 19:04:58.913669  228393 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0717 19:04:58.913679  228393 command_runner.go:130] > # stream_tls_key = ""
	I0717 19:04:58.913692  228393 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0717 19:04:58.913701  228393 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0717 19:04:58.913707  228393 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0717 19:04:58.913711  228393 command_runner.go:130] > # stream_tls_ca = ""
	I0717 19:04:58.913726  228393 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0717 19:04:58.913741  228393 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0717 19:04:58.913756  228393 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0717 19:04:58.913766  228393 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0717 19:04:58.913835  228393 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0717 19:04:58.913849  228393 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0717 19:04:58.913855  228393 command_runner.go:130] > [crio.runtime]
	I0717 19:04:58.913865  228393 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0717 19:04:58.913878  228393 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0717 19:04:58.913886  228393 command_runner.go:130] > # "nofile=1024:2048"
	I0717 19:04:58.913900  228393 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0717 19:04:58.913912  228393 command_runner.go:130] > # default_ulimits = [
	I0717 19:04:58.913918  228393 command_runner.go:130] > # ]
	I0717 19:04:58.913931  228393 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0717 19:04:58.913940  228393 command_runner.go:130] > # no_pivot = false
	I0717 19:04:58.913947  228393 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0717 19:04:58.913960  228393 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0717 19:04:58.913972  228393 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0717 19:04:58.913987  228393 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0717 19:04:58.913998  228393 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0717 19:04:58.914012  228393 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0717 19:04:58.914021  228393 command_runner.go:130] > # conmon = ""
	I0717 19:04:58.914029  228393 command_runner.go:130] > # Cgroup setting for conmon
	I0717 19:04:58.914039  228393 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0717 19:04:58.914045  228393 command_runner.go:130] > conmon_cgroup = "pod"
	I0717 19:04:58.914055  228393 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0717 19:04:58.914064  228393 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0717 19:04:58.914075  228393 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0717 19:04:58.914082  228393 command_runner.go:130] > # conmon_env = [
	I0717 19:04:58.914093  228393 command_runner.go:130] > # ]
	I0717 19:04:58.914103  228393 command_runner.go:130] > # Additional environment variables to set for all the
	I0717 19:04:58.914111  228393 command_runner.go:130] > # containers. These are overridden if set in the
	I0717 19:04:58.914121  228393 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0717 19:04:58.914129  228393 command_runner.go:130] > # default_env = [
	I0717 19:04:58.914138  228393 command_runner.go:130] > # ]
	I0717 19:04:58.914147  228393 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0717 19:04:58.914154  228393 command_runner.go:130] > # selinux = false
	I0717 19:04:58.914169  228393 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0717 19:04:58.914183  228393 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0717 19:04:58.914197  228393 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0717 19:04:58.914206  228393 command_runner.go:130] > # seccomp_profile = ""
	I0717 19:04:58.914217  228393 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0717 19:04:58.914230  228393 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0717 19:04:58.914243  228393 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0717 19:04:58.914253  228393 command_runner.go:130] > # which might increase security.
	I0717 19:04:58.914263  228393 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0717 19:04:58.914277  228393 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0717 19:04:58.914290  228393 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0717 19:04:58.914304  228393 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0717 19:04:58.914317  228393 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0717 19:04:58.914328  228393 command_runner.go:130] > # This option supports live configuration reload.
	I0717 19:04:58.914339  228393 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0717 19:04:58.914354  228393 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0717 19:04:58.914365  228393 command_runner.go:130] > # the cgroup blockio controller.
	I0717 19:04:58.914376  228393 command_runner.go:130] > # blockio_config_file = ""
	I0717 19:04:58.914392  228393 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0717 19:04:58.914402  228393 command_runner.go:130] > # irqbalance daemon.
	I0717 19:04:58.914411  228393 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0717 19:04:58.914426  228393 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0717 19:04:58.914435  228393 command_runner.go:130] > # This option supports live configuration reload.
	I0717 19:04:58.914442  228393 command_runner.go:130] > # rdt_config_file = ""
	I0717 19:04:58.914454  228393 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0717 19:04:58.914464  228393 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0717 19:04:58.914481  228393 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0717 19:04:58.914518  228393 command_runner.go:130] > # separate_pull_cgroup = ""
	I0717 19:04:58.914532  228393 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0717 19:04:58.914543  228393 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0717 19:04:58.914564  228393 command_runner.go:130] > # will be added.
	I0717 19:04:58.914571  228393 command_runner.go:130] > # default_capabilities = [
	I0717 19:04:58.914585  228393 command_runner.go:130] > # 	"CHOWN",
	I0717 19:04:58.914593  228393 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0717 19:04:58.914603  228393 command_runner.go:130] > # 	"FSETID",
	I0717 19:04:58.914610  228393 command_runner.go:130] > # 	"FOWNER",
	I0717 19:04:58.914619  228393 command_runner.go:130] > # 	"SETGID",
	I0717 19:04:58.914625  228393 command_runner.go:130] > # 	"SETUID",
	I0717 19:04:58.914635  228393 command_runner.go:130] > # 	"SETPCAP",
	I0717 19:04:58.914641  228393 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0717 19:04:58.914649  228393 command_runner.go:130] > # 	"KILL",
	I0717 19:04:58.914654  228393 command_runner.go:130] > # ]
	I0717 19:04:58.914661  228393 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0717 19:04:58.914673  228393 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0717 19:04:58.914683  228393 command_runner.go:130] > # add_inheritable_capabilities = true
	I0717 19:04:58.914698  228393 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0717 19:04:58.914711  228393 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0717 19:04:58.914726  228393 command_runner.go:130] > # default_sysctls = [
	I0717 19:04:58.914732  228393 command_runner.go:130] > # ]
	I0717 19:04:58.914743  228393 command_runner.go:130] > # List of devices on the host that a
	I0717 19:04:58.914752  228393 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0717 19:04:58.914762  228393 command_runner.go:130] > # allowed_devices = [
	I0717 19:04:58.914769  228393 command_runner.go:130] > # 	"/dev/fuse",
	I0717 19:04:58.914774  228393 command_runner.go:130] > # ]
	I0717 19:04:58.914783  228393 command_runner.go:130] > # List of additional devices. specified as
	I0717 19:04:58.914864  228393 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0717 19:04:58.914898  228393 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0717 19:04:58.914913  228393 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0717 19:04:58.914919  228393 command_runner.go:130] > # additional_devices = [
	I0717 19:04:58.914928  228393 command_runner.go:130] > # ]
	I0717 19:04:58.914936  228393 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0717 19:04:58.914945  228393 command_runner.go:130] > # cdi_spec_dirs = [
	I0717 19:04:58.914953  228393 command_runner.go:130] > # 	"/etc/cdi",
	I0717 19:04:58.914964  228393 command_runner.go:130] > # 	"/var/run/cdi",
	I0717 19:04:58.914969  228393 command_runner.go:130] > # ]
	I0717 19:04:58.914980  228393 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0717 19:04:58.914994  228393 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0717 19:04:58.915004  228393 command_runner.go:130] > # Defaults to false.
	I0717 19:04:58.915013  228393 command_runner.go:130] > # device_ownership_from_security_context = false
	I0717 19:04:58.915027  228393 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0717 19:04:58.915042  228393 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0717 19:04:58.915051  228393 command_runner.go:130] > # hooks_dir = [
	I0717 19:04:58.915058  228393 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0717 19:04:58.915062  228393 command_runner.go:130] > # ]
	I0717 19:04:58.915072  228393 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0717 19:04:58.915087  228393 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0717 19:04:58.915099  228393 command_runner.go:130] > # its default mounts from the following two files:
	I0717 19:04:58.915107  228393 command_runner.go:130] > #
	I0717 19:04:58.915118  228393 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0717 19:04:58.915132  228393 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0717 19:04:58.915144  228393 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0717 19:04:58.915148  228393 command_runner.go:130] > #
	I0717 19:04:58.915159  228393 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0717 19:04:58.915173  228393 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0717 19:04:58.915188  228393 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0717 19:04:58.915200  228393 command_runner.go:130] > #      only add mounts it finds in this file.
	I0717 19:04:58.915209  228393 command_runner.go:130] > #
	I0717 19:04:58.915221  228393 command_runner.go:130] > # default_mounts_file = ""
	I0717 19:04:58.915232  228393 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0717 19:04:58.915249  228393 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0717 19:04:58.915263  228393 command_runner.go:130] > # pids_limit = 0
	I0717 19:04:58.915278  228393 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0717 19:04:58.915292  228393 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0717 19:04:58.915306  228393 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0717 19:04:58.915320  228393 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0717 19:04:58.915327  228393 command_runner.go:130] > # log_size_max = -1
	I0717 19:04:58.915339  228393 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0717 19:04:58.915350  228393 command_runner.go:130] > # log_to_journald = false
	I0717 19:04:58.915364  228393 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0717 19:04:58.915376  228393 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0717 19:04:58.915387  228393 command_runner.go:130] > # Path to directory for container attach sockets.
	I0717 19:04:58.915399  228393 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0717 19:04:58.915408  228393 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0717 19:04:58.915415  228393 command_runner.go:130] > # bind_mount_prefix = ""
	I0717 19:04:58.915428  228393 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0717 19:04:58.915439  228393 command_runner.go:130] > # read_only = false
	I0717 19:04:58.915453  228393 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0717 19:04:58.915467  228393 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0717 19:04:58.915477  228393 command_runner.go:130] > # live configuration reload.
	I0717 19:04:58.915486  228393 command_runner.go:130] > # log_level = "info"
	I0717 19:04:58.915494  228393 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0717 19:04:58.915505  228393 command_runner.go:130] > # This option supports live configuration reload.
	I0717 19:04:58.915515  228393 command_runner.go:130] > # log_filter = ""
	I0717 19:04:58.915529  228393 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0717 19:04:58.915542  228393 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0717 19:04:58.915552  228393 command_runner.go:130] > # separated by comma.
	I0717 19:04:58.915562  228393 command_runner.go:130] > # uid_mappings = ""
	I0717 19:04:58.915572  228393 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0717 19:04:58.915588  228393 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0717 19:04:58.915598  228393 command_runner.go:130] > # separated by comma.
	I0717 19:04:58.915608  228393 command_runner.go:130] > # gid_mappings = ""
	I0717 19:04:58.915622  228393 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0717 19:04:58.915636  228393 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0717 19:04:58.915652  228393 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0717 19:04:58.915662  228393 command_runner.go:130] > # minimum_mappable_uid = -1
	I0717 19:04:58.915670  228393 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0717 19:04:58.915684  228393 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0717 19:04:58.915698  228393 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0717 19:04:58.915709  228393 command_runner.go:130] > # minimum_mappable_gid = -1
	I0717 19:04:58.915725  228393 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0717 19:04:58.915739  228393 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0717 19:04:58.915750  228393 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0717 19:04:58.915757  228393 command_runner.go:130] > # ctr_stop_timeout = 30
	I0717 19:04:58.915767  228393 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0717 19:04:58.915813  228393 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0717 19:04:58.915826  228393 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0717 19:04:58.915837  228393 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0717 19:04:58.915848  228393 command_runner.go:130] > # drop_infra_ctr = true
	I0717 19:04:58.915862  228393 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0717 19:04:58.915875  228393 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0717 19:04:58.915896  228393 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0717 19:04:58.915906  228393 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0717 19:04:58.915920  228393 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0717 19:04:58.915932  228393 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0717 19:04:58.915943  228393 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0717 19:04:58.915958  228393 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0717 19:04:58.915965  228393 command_runner.go:130] > # pinns_path = ""
	I0717 19:04:58.915994  228393 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0717 19:04:58.916007  228393 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0717 19:04:58.916021  228393 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0717 19:04:58.916033  228393 command_runner.go:130] > # default_runtime = "runc"
	I0717 19:04:58.916045  228393 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0717 19:04:58.916059  228393 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0717 19:04:58.916078  228393 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0717 19:04:58.916090  228393 command_runner.go:130] > # creation as a file is not desired either.
	I0717 19:04:58.916106  228393 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0717 19:04:58.916117  228393 command_runner.go:130] > # the hostname is being managed dynamically.
	I0717 19:04:58.916128  228393 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0717 19:04:58.916137  228393 command_runner.go:130] > # ]
	I0717 19:04:58.916153  228393 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0717 19:04:58.916166  228393 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0717 19:04:58.916180  228393 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0717 19:04:58.916193  228393 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0717 19:04:58.916201  228393 command_runner.go:130] > #
	I0717 19:04:58.916212  228393 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0717 19:04:58.916222  228393 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0717 19:04:58.916229  228393 command_runner.go:130] > #  runtime_type = "oci"
	I0717 19:04:58.916240  228393 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0717 19:04:58.916251  228393 command_runner.go:130] > #  privileged_without_host_devices = false
	I0717 19:04:58.916262  228393 command_runner.go:130] > #  allowed_annotations = []
	I0717 19:04:58.916272  228393 command_runner.go:130] > # Where:
	I0717 19:04:58.916283  228393 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0717 19:04:58.916296  228393 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0717 19:04:58.916309  228393 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0717 19:04:58.916323  228393 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0717 19:04:58.916333  228393 command_runner.go:130] > #   in $PATH.
	I0717 19:04:58.916346  228393 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0717 19:04:58.916357  228393 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0717 19:04:58.916371  228393 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0717 19:04:58.916380  228393 command_runner.go:130] > #   state.
	I0717 19:04:58.916393  228393 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0717 19:04:58.916406  228393 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0717 19:04:58.916419  228393 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0717 19:04:58.916431  228393 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0717 19:04:58.916444  228393 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0717 19:04:58.916458  228393 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0717 19:04:58.916469  228393 command_runner.go:130] > #   The currently recognized values are:
	I0717 19:04:58.916483  228393 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0717 19:04:58.916497  228393 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0717 19:04:58.916514  228393 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0717 19:04:58.916528  228393 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0717 19:04:58.916550  228393 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0717 19:04:58.916560  228393 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0717 19:04:58.916570  228393 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0717 19:04:58.916587  228393 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0717 19:04:58.916596  228393 command_runner.go:130] > #   should be moved to the container's cgroup
	I0717 19:04:58.916603  228393 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0717 19:04:58.916611  228393 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0717 19:04:58.916618  228393 command_runner.go:130] > runtime_type = "oci"
	I0717 19:04:58.916625  228393 command_runner.go:130] > runtime_root = "/run/runc"
	I0717 19:04:58.916631  228393 command_runner.go:130] > runtime_config_path = ""
	I0717 19:04:58.916641  228393 command_runner.go:130] > monitor_path = ""
	I0717 19:04:58.916648  228393 command_runner.go:130] > monitor_cgroup = ""
	I0717 19:04:58.916658  228393 command_runner.go:130] > monitor_exec_cgroup = ""
	I0717 19:04:58.916728  228393 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0717 19:04:58.916738  228393 command_runner.go:130] > # running containers
	I0717 19:04:58.916744  228393 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0717 19:04:58.916756  228393 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0717 19:04:58.916769  228393 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0717 19:04:58.916780  228393 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0717 19:04:58.916791  228393 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0717 19:04:58.916801  228393 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0717 19:04:58.916811  228393 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0717 19:04:58.916821  228393 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0717 19:04:58.916830  228393 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0717 19:04:58.916840  228393 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0717 19:04:58.916852  228393 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0717 19:04:58.916862  228393 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0717 19:04:58.916875  228393 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0717 19:04:58.916889  228393 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0717 19:04:58.916904  228393 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0717 19:04:58.916918  228393 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0717 19:04:58.916935  228393 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0717 19:04:58.916949  228393 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0717 19:04:58.916960  228393 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0717 19:04:58.916974  228393 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0717 19:04:58.916983  228393 command_runner.go:130] > # Example:
	I0717 19:04:58.916995  228393 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0717 19:04:58.917006  228393 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0717 19:04:58.917017  228393 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0717 19:04:58.917027  228393 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0717 19:04:58.917037  228393 command_runner.go:130] > # cpuset = 0
	I0717 19:04:58.917046  228393 command_runner.go:130] > # cpushares = "0-1"
	I0717 19:04:58.917054  228393 command_runner.go:130] > # Where:
	I0717 19:04:58.917064  228393 command_runner.go:130] > # The workload name is workload-type.
	I0717 19:04:58.917077  228393 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0717 19:04:58.917089  228393 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0717 19:04:58.917101  228393 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0717 19:04:58.917115  228393 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0717 19:04:58.917126  228393 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0717 19:04:58.917134  228393 command_runner.go:130] > # 
	I0717 19:04:58.917147  228393 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0717 19:04:58.917156  228393 command_runner.go:130] > #
	I0717 19:04:58.917174  228393 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0717 19:04:58.917187  228393 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0717 19:04:58.917200  228393 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0717 19:04:58.917213  228393 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0717 19:04:58.917225  228393 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0717 19:04:58.917235  228393 command_runner.go:130] > [crio.image]
	I0717 19:04:58.917246  228393 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0717 19:04:58.917256  228393 command_runner.go:130] > # default_transport = "docker://"
	I0717 19:04:58.917265  228393 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0717 19:04:58.917277  228393 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0717 19:04:58.917285  228393 command_runner.go:130] > # global_auth_file = ""
	I0717 19:04:58.917296  228393 command_runner.go:130] > # The image used to instantiate infra containers.
	I0717 19:04:58.917306  228393 command_runner.go:130] > # This option supports live configuration reload.
	I0717 19:04:58.917317  228393 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0717 19:04:58.917330  228393 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0717 19:04:58.917344  228393 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0717 19:04:58.917356  228393 command_runner.go:130] > # This option supports live configuration reload.
	I0717 19:04:58.917367  228393 command_runner.go:130] > # pause_image_auth_file = ""
	I0717 19:04:58.917378  228393 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0717 19:04:58.917390  228393 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0717 19:04:58.917402  228393 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0717 19:04:58.917412  228393 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0717 19:04:58.917421  228393 command_runner.go:130] > # pause_command = "/pause"
	I0717 19:04:58.917433  228393 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0717 19:04:58.917447  228393 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0717 19:04:58.917460  228393 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0717 19:04:58.917473  228393 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0717 19:04:58.917484  228393 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0717 19:04:58.917494  228393 command_runner.go:130] > # signature_policy = ""
	I0717 19:04:58.917537  228393 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0717 19:04:58.917550  228393 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0717 19:04:58.917559  228393 command_runner.go:130] > # changing them here.
	I0717 19:04:58.917568  228393 command_runner.go:130] > # insecure_registries = [
	I0717 19:04:58.917573  228393 command_runner.go:130] > # ]
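As the comments note, registry configuration normally lives in the system-wide containers-registries.conf(5) file rather than here. A minimal sketch of /etc/containers/registries.conf; the registry host is hypothetical:

	unqualified-search-registries = ["docker.io"]

	[[registry]]
	prefix = "registry.example.internal"
	location = "registry.example.internal:5000"
	insecure = true   # skip TLS verification for this registry only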
	I0717 19:04:58.917592  228393 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0717 19:04:58.917602  228393 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0717 19:04:58.917611  228393 command_runner.go:130] > # image_volumes = "mkdir"
	I0717 19:04:58.917619  228393 command_runner.go:130] > # Temporary directory to use for storing big files
	I0717 19:04:58.917628  228393 command_runner.go:130] > # big_files_temporary_dir = ""
	I0717 19:04:58.917637  228393 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of

	I0717 19:04:58.917646  228393 command_runner.go:130] > # CNI plugins.
	I0717 19:04:58.917655  228393 command_runner.go:130] > [crio.network]
	I0717 19:04:58.917667  228393 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0717 19:04:58.917678  228393 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0717 19:04:58.917687  228393 command_runner.go:130] > # cni_default_network = ""
	I0717 19:04:58.917699  228393 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0717 19:04:58.917709  228393 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0717 19:04:58.917721  228393 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0717 19:04:58.917730  228393 command_runner.go:130] > # plugin_dirs = [
	I0717 19:04:58.917739  228393 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0717 19:04:58.917747  228393 command_runner.go:130] > # ]
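CRI-O simply picks up the first network it finds under network_dir (in this run minikube later applies kindnet for that purpose). For illustration only, a hypothetical /etc/cni/net.d/10-example.conflist in the standard CNI format could look like:

	{
	  "cniVersion": "0.4.0",
	  "name": "example-net",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "cni0",
	      "isDefaultGateway": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}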
	I0717 19:04:58.917759  228393 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0717 19:04:58.917767  228393 command_runner.go:130] > [crio.metrics]
	I0717 19:04:58.917778  228393 command_runner.go:130] > # Globally enable or disable metrics support.
	I0717 19:04:58.917787  228393 command_runner.go:130] > # enable_metrics = false
	I0717 19:04:58.917798  228393 command_runner.go:130] > # Specify enabled metrics collectors.
	I0717 19:04:58.917809  228393 command_runner.go:130] > # Per default all metrics are enabled.
	I0717 19:04:58.917822  228393 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0717 19:04:58.917835  228393 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0717 19:04:58.917847  228393 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0717 19:04:58.917858  228393 command_runner.go:130] > # metrics_collectors = [
	I0717 19:04:58.917868  228393 command_runner.go:130] > # 	"operations",
	I0717 19:04:58.917879  228393 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0717 19:04:58.917889  228393 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0717 19:04:58.917899  228393 command_runner.go:130] > # 	"operations_errors",
	I0717 19:04:58.917909  228393 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0717 19:04:58.917918  228393 command_runner.go:130] > # 	"image_pulls_by_name",
	I0717 19:04:58.917927  228393 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0717 19:04:58.917937  228393 command_runner.go:130] > # 	"image_pulls_failures",
	I0717 19:04:58.917947  228393 command_runner.go:130] > # 	"image_pulls_successes",
	I0717 19:04:58.917956  228393 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0717 19:04:58.917962  228393 command_runner.go:130] > # 	"image_layer_reuse",
	I0717 19:04:58.917971  228393 command_runner.go:130] > # 	"containers_oom_total",
	I0717 19:04:58.917979  228393 command_runner.go:130] > # 	"containers_oom",
	I0717 19:04:58.917989  228393 command_runner.go:130] > # 	"processes_defunct",
	I0717 19:04:58.917997  228393 command_runner.go:130] > # 	"operations_total",
	I0717 19:04:58.918008  228393 command_runner.go:130] > # 	"operations_latency_seconds",
	I0717 19:04:58.918019  228393 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0717 19:04:58.918029  228393 command_runner.go:130] > # 	"operations_errors_total",
	I0717 19:04:58.918040  228393 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0717 19:04:58.918051  228393 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0717 19:04:58.918061  228393 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0717 19:04:58.918072  228393 command_runner.go:130] > # 	"image_pulls_success_total",
	I0717 19:04:58.918083  228393 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0717 19:04:58.918092  228393 command_runner.go:130] > # 	"containers_oom_count_total",
	I0717 19:04:58.918100  228393 command_runner.go:130] > # ]
	I0717 19:04:58.918111  228393 command_runner.go:130] > # The port on which the metrics server will listen.
	I0717 19:04:58.918120  228393 command_runner.go:130] > # metrics_port = 9090
	I0717 19:04:58.918130  228393 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0717 19:04:58.918139  228393 command_runner.go:130] > # metrics_socket = ""
	I0717 19:04:58.918150  228393 command_runner.go:130] > # The certificate for the secure metrics server.
	I0717 19:04:58.918162  228393 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0717 19:04:58.918174  228393 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0717 19:04:58.918184  228393 command_runner.go:130] > # certificate on any modification event.
	I0717 19:04:58.918193  228393 command_runner.go:130] > # metrics_cert = ""
	I0717 19:04:58.918204  228393 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0717 19:04:58.918217  228393 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0717 19:04:58.918225  228393 command_runner.go:130] > # metrics_key = ""
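Metrics are disabled in this run. A hedged sketch of enabling and spot-checking them on the node, using only the option names from the [crio.metrics] comments above; the grep pattern is illustrative:

	# in /etc/crio/crio.conf (or a drop-in under /etc/crio/crio.conf.d/):
	#   [crio.metrics]
	#   enable_metrics = true
	#   metrics_port = 9090
	sudo systemctl restart crio
	curl -s http://127.0.0.1:9090/metrics | grep -i operations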
	I0717 19:04:58.918233  228393 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0717 19:04:58.918241  228393 command_runner.go:130] > [crio.tracing]
	I0717 19:04:58.918253  228393 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0717 19:04:58.918262  228393 command_runner.go:130] > # enable_tracing = false
	I0717 19:04:58.918273  228393 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0717 19:04:58.918282  228393 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0717 19:04:58.918292  228393 command_runner.go:130] > # Number of samples to collect per million spans.
	I0717 19:04:58.918301  228393 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0717 19:04:58.918313  228393 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0717 19:04:58.918321  228393 command_runner.go:130] > [crio.stats]
	I0717 19:04:58.918332  228393 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0717 19:04:58.918344  228393 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0717 19:04:58.918354  228393 command_runner.go:130] > # stats_collection_period = 0
	I0717 19:04:58.918672  228393 command_runner.go:130] ! time="2023-07-17 19:04:58.909928079Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0717 19:04:58.918702  228393 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0717 19:04:58.918764  228393 cni.go:84] Creating CNI manager for ""
	I0717 19:04:58.918774  228393 cni.go:137] 2 nodes found, recommending kindnet
	I0717 19:04:58.918787  228393 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 19:04:58.918809  228393 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-549411 NodeName:multinode-549411-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 19:04:58.918919  228393 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-549411-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
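If a generated config like the one above needed to be checked by hand, kubeadm v1.26+ ships a validate subcommand; a sketch, assuming the YAML were saved to a hypothetical /tmp/kubeadm.yaml on the node:

	sudo /var/lib/minikube/binaries/v1.27.3/kubeadm config validate --config /tmp/kubeadm.yaml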
	I0717 19:04:58.918963  228393 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-549411-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:multinode-549411 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 19:04:58.919013  228393 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 19:04:58.926535  228393 command_runner.go:130] > kubeadm
	I0717 19:04:58.926551  228393 command_runner.go:130] > kubectl
	I0717 19:04:58.926555  228393 command_runner.go:130] > kubelet
	I0717 19:04:58.927157  228393 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:04:58.927212  228393 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0717 19:04:58.935031  228393 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0717 19:04:58.950952  228393 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 19:04:58.966772  228393 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0717 19:04:58.969880  228393 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:04:58.979836  228393 host.go:66] Checking if "multinode-549411" exists ...
	I0717 19:04:58.980151  228393 config.go:182] Loaded profile config "multinode-549411": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:04:58.980100  228393 start.go:304] JoinCluster: &{Name:multinode-549411 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-549411 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain
:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker
MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:04:58.980183  228393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0717 19:04:58.980234  228393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-549411
	I0717 19:04:58.997242  228393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/multinode-549411/id_rsa Username:docker}
	I0717 19:04:59.134340  228393 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token zpa5vb.x9uzq70jjdgco8ye --discovery-token-ca-cert-hash sha256:937c4239101ec8b12459e4fa3de0759350fbf81fa4f52752b966f06f42d7d7ec 
	I0717 19:04:59.138580  228393 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0717 19:04:59.138636  228393 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zpa5vb.x9uzq70jjdgco8ye --discovery-token-ca-cert-hash sha256:937c4239101ec8b12459e4fa3de0759350fbf81fa4f52752b966f06f42d7d7ec --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-549411-m02"
	I0717 19:04:59.174566  228393 command_runner.go:130] ! W0717 19:04:59.174035    1113 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0717 19:04:59.202986  228393 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1037-gcp\n", err: exit status 1
	I0717 19:04:59.266266  228393 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 19:05:01.395188  228393 command_runner.go:130] > [preflight] Running pre-flight checks
	I0717 19:05:01.395215  228393 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0717 19:05:01.395227  228393 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1037-gcp
	I0717 19:05:01.395233  228393 command_runner.go:130] > OS: Linux
	I0717 19:05:01.395238  228393 command_runner.go:130] > CGROUPS_CPU: enabled
	I0717 19:05:01.395243  228393 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0717 19:05:01.395248  228393 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0717 19:05:01.395253  228393 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0717 19:05:01.395258  228393 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0717 19:05:01.395263  228393 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0717 19:05:01.395269  228393 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0717 19:05:01.395277  228393 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0717 19:05:01.395282  228393 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0717 19:05:01.395290  228393 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0717 19:05:01.395298  228393 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0717 19:05:01.395312  228393 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 19:05:01.395318  228393 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 19:05:01.395323  228393 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0717 19:05:01.395335  228393 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0717 19:05:01.395342  228393 command_runner.go:130] > This node has joined the cluster:
	I0717 19:05:01.395350  228393 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0717 19:05:01.395358  228393 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0717 19:05:01.395364  228393 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0717 19:05:01.395384  228393 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zpa5vb.x9uzq70jjdgco8ye --discovery-token-ca-cert-hash sha256:937c4239101ec8b12459e4fa3de0759350fbf81fa4f52752b966f06f42d7d7ec --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-549411-m02": (2.256731611s)
	I0717 19:05:01.395406  228393 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0717 19:05:01.482452  228393 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0717 19:05:01.559353  228393 start.go:306] JoinCluster complete in 2.579243708s
	I0717 19:05:01.559383  228393 cni.go:84] Creating CNI manager for ""
	I0717 19:05:01.559389  228393 cni.go:137] 2 nodes found, recommending kindnet
	I0717 19:05:01.559435  228393 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 19:05:01.563206  228393 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0717 19:05:01.563235  228393 command_runner.go:130] >   Size: 3955775   	Blocks: 7736       IO Block: 4096   regular file
	I0717 19:05:01.563246  228393 command_runner.go:130] > Device: 37h/55d	Inode: 565096      Links: 1
	I0717 19:05:01.563252  228393 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0717 19:05:01.563260  228393 command_runner.go:130] > Access: 2023-05-09 19:53:47.000000000 +0000
	I0717 19:05:01.563265  228393 command_runner.go:130] > Modify: 2023-05-09 19:53:47.000000000 +0000
	I0717 19:05:01.563270  228393 command_runner.go:130] > Change: 2023-07-17 18:45:36.078726379 +0000
	I0717 19:05:01.563278  228393 command_runner.go:130] >  Birth: 2023-07-17 18:45:36.054724642 +0000
	I0717 19:05:01.563354  228393 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0717 19:05:01.563365  228393 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 19:05:01.579836  228393 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0717 19:05:01.830187  228393 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0717 19:05:01.833932  228393 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0717 19:05:01.836637  228393 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0717 19:05:01.848652  228393 command_runner.go:130] > daemonset.apps/kindnet configured
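The kindnet rollout could also be verified by hand; a sketch, assuming the kubeconfig context is named after the profile and the DaemonSet carries minikube's usual app=kindnet label:

	kubectl --context multinode-549411 -n kube-system rollout status daemonset/kindnet
	kubectl --context multinode-549411 -n kube-system get pods -l app=kindnet -o wide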
	I0717 19:05:01.853053  228393 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16890-138069/kubeconfig
	I0717 19:05:01.853335  228393 kapi.go:59] client config for multinode-549411: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16890-138069/.minikube/profiles/multinode-549411/client.crt", KeyFile:"/home/jenkins/minikube-integration/16890-138069/.minikube/profiles/multinode-549411/client.key", CAFile:"/home/jenkins/minikube-integration/16890-138069/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 19:05:01.853664  228393 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0717 19:05:01.853676  228393 round_trippers.go:469] Request Headers:
	I0717 19:05:01.853684  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:05:01.853690  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:05:01.856574  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:05:01.856599  228393 round_trippers.go:577] Response Headers:
	I0717 19:05:01.856607  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:05:01 GMT
	I0717 19:05:01.856614  228393 round_trippers.go:580]     Audit-Id: d28af0c8-9346-4124-abe3-d4ba2d524bfa
	I0717 19:05:01.856619  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:05:01.856625  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:05:01.856630  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:05:01.856636  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:05:01.856643  228393 round_trippers.go:580]     Content-Length: 291
	I0717 19:05:01.856690  228393 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0a0aff40-ae4c-45a2-85d1-4b9fe202ee82","resourceVersion":"444","creationTimestamp":"2023-07-17T19:03:59Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0717 19:05:01.856788  228393 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-549411" context rescaled to 1 replicas
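The client-go traffic around this point (the coredns scale check above and the node/pod readiness polling below) maps roughly onto these kubectl commands; a sketch, again assuming the context name matches the profile:

	kubectl --context multinode-549411 -n kube-system get deployment coredns -o jsonpath='{.spec.replicas}'
	kubectl --context multinode-549411 get node multinode-549411-m02 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	kubectl --context multinode-549411 -n kube-system get pods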
	I0717 19:05:01.856818  228393 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0717 19:05:01.860567  228393 out.go:177] * Verifying Kubernetes components...
	I0717 19:05:01.862304  228393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:05:01.874000  228393 loader.go:373] Config loaded from file:  /home/jenkins/minikube-integration/16890-138069/kubeconfig
	I0717 19:05:01.874236  228393 kapi.go:59] client config for multinode-549411: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16890-138069/.minikube/profiles/multinode-549411/client.crt", KeyFile:"/home/jenkins/minikube-integration/16890-138069/.minikube/profiles/multinode-549411/client.key", CAFile:"/home/jenkins/minikube-integration/16890-138069/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 19:05:01.874487  228393 node_ready.go:35] waiting up to 6m0s for node "multinode-549411-m02" to be "Ready" ...
	I0717 19:05:01.874552  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411-m02
	I0717 19:05:01.874556  228393 round_trippers.go:469] Request Headers:
	I0717 19:05:01.874564  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:05:01.874571  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:05:01.876967  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:05:01.876992  228393 round_trippers.go:577] Response Headers:
	I0717 19:05:01.877002  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:05:01 GMT
	I0717 19:05:01.877012  228393 round_trippers.go:580]     Audit-Id: 3e83ddbb-7b1e-4787-9ffa-e27445a678bc
	I0717 19:05:01.877021  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:05:01.877030  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:05:01.877043  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:05:01.877050  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:05:01.877195  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411-m02","uid":"59b13224-e146-4ea6-a6f7-008368de59b9","resourceVersion":"481","creationTimestamp":"2023-07-17T19:05:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:05:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:05:01Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I0717 19:05:02.378324  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411-m02
	I0717 19:05:02.378345  228393 round_trippers.go:469] Request Headers:
	I0717 19:05:02.378353  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:05:02.378360  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:05:02.380721  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:05:02.380744  228393 round_trippers.go:577] Response Headers:
	I0717 19:05:02.380754  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:05:02.380761  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:05:02 GMT
	I0717 19:05:02.380769  228393 round_trippers.go:580]     Audit-Id: 0c5facdb-f2d8-4810-b080-989d660bb918
	I0717 19:05:02.380776  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:05:02.380784  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:05:02.380794  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:05:02.380893  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411-m02","uid":"59b13224-e146-4ea6-a6f7-008368de59b9","resourceVersion":"487","creationTimestamp":"2023-07-17T19:05:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:05:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:05:0
1Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5210 chars]
	I0717 19:05:02.877738  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411-m02
	I0717 19:05:02.877762  228393 round_trippers.go:469] Request Headers:
	I0717 19:05:02.877773  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:05:02.877781  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:05:02.880297  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:05:02.880321  228393 round_trippers.go:577] Response Headers:
	I0717 19:05:02.880332  228393 round_trippers.go:580]     Audit-Id: e43110c9-27b3-45d4-ab9a-6e4736c5c78a
	I0717 19:05:02.880342  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:05:02.880351  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:05:02.880364  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:05:02.880373  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:05:02.880380  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:05:02 GMT
	I0717 19:05:02.880516  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411-m02","uid":"59b13224-e146-4ea6-a6f7-008368de59b9","resourceVersion":"487","creationTimestamp":"2023-07-17T19:05:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:05:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:05:0
1Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5210 chars]
	I0717 19:05:03.378058  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411-m02
	I0717 19:05:03.378080  228393 round_trippers.go:469] Request Headers:
	I0717 19:05:03.378089  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:05:03.378095  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:05:03.380545  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:05:03.380578  228393 round_trippers.go:577] Response Headers:
	I0717 19:05:03.380590  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:05:03.380604  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:05:03.380614  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:05:03 GMT
	I0717 19:05:03.380627  228393 round_trippers.go:580]     Audit-Id: 5d41a30e-dadc-4a9a-a1ef-67cf57ab5ba7
	I0717 19:05:03.380639  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:05:03.380652  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:05:03.380770  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411-m02","uid":"59b13224-e146-4ea6-a6f7-008368de59b9","resourceVersion":"501","creationTimestamp":"2023-07-17T19:05:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:05:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:05:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5296 chars]
	I0717 19:05:03.381108  228393 node_ready.go:49] node "multinode-549411-m02" has status "Ready":"True"
	I0717 19:05:03.381122  228393 node_ready.go:38] duration metric: took 1.506625738s waiting for node "multinode-549411-m02" to be "Ready" ...
	I0717 19:05:03.381132  228393 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:05:03.381194  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0717 19:05:03.381202  228393 round_trippers.go:469] Request Headers:
	I0717 19:05:03.381209  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:05:03.381215  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:05:03.384292  228393 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0717 19:05:03.384319  228393 round_trippers.go:577] Response Headers:
	I0717 19:05:03.384329  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:05:03.384339  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:05:03 GMT
	I0717 19:05:03.384349  228393 round_trippers.go:580]     Audit-Id: f865f79e-6f48-497f-ae63-96d8c2e3f940
	I0717 19:05:03.384358  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:05:03.384367  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:05:03.384383  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:05:03.384988  228393 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"502"},"items":[{"metadata":{"name":"coredns-5d78c9869d-98dl8","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"0b962161-8aa7-48e3-bfab-c96b8fcdeb95","resourceVersion":"440","creationTimestamp":"2023-07-17T19:04:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"ff6caf3c-f3bb-45e6-87e6-31a61699767c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:04:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ff6caf3c-f3bb-45e6-87e6-31a61699767c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68974 chars]
	I0717 19:05:03.387127  228393 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-98dl8" in "kube-system" namespace to be "Ready" ...
	I0717 19:05:03.387195  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-98dl8
	I0717 19:05:03.387203  228393 round_trippers.go:469] Request Headers:
	I0717 19:05:03.387210  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:05:03.387216  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:05:03.389187  228393 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 19:05:03.389212  228393 round_trippers.go:577] Response Headers:
	I0717 19:05:03.389222  228393 round_trippers.go:580]     Audit-Id: 7ecbe0c2-5f3c-4b3d-8ef0-1cbaee71efb1
	I0717 19:05:03.389232  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:05:03.389241  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:05:03.389250  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:05:03.389260  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:05:03.389268  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:05:03 GMT
	I0717 19:05:03.389359  228393 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-98dl8","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"0b962161-8aa7-48e3-bfab-c96b8fcdeb95","resourceVersion":"440","creationTimestamp":"2023-07-17T19:04:12Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"ff6caf3c-f3bb-45e6-87e6-31a61699767c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:04:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ff6caf3c-f3bb-45e6-87e6-31a61699767c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0717 19:05:03.389817  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:05:03.389829  228393 round_trippers.go:469] Request Headers:
	I0717 19:05:03.389836  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:05:03.389842  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:05:03.391607  228393 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 19:05:03.391628  228393 round_trippers.go:577] Response Headers:
	I0717 19:05:03.391638  228393 round_trippers.go:580]     Audit-Id: 8a8eed03-c17f-48b3-9cc8-05bf42203ad0
	I0717 19:05:03.391646  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:05:03.391656  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:05:03.391664  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:05:03.391676  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:05:03.391686  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:05:03 GMT
	I0717 19:05:03.391800  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"424","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0717 19:05:03.392109  228393 pod_ready.go:92] pod "coredns-5d78c9869d-98dl8" in "kube-system" namespace has status "Ready":"True"
	I0717 19:05:03.392124  228393 pod_ready.go:81] duration metric: took 4.977375ms waiting for pod "coredns-5d78c9869d-98dl8" in "kube-system" namespace to be "Ready" ...
	I0717 19:05:03.392135  228393 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-549411" in "kube-system" namespace to be "Ready" ...
	I0717 19:05:03.392183  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-549411
	I0717 19:05:03.392189  228393 round_trippers.go:469] Request Headers:
	I0717 19:05:03.392196  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:05:03.392208  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:05:03.394114  228393 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 19:05:03.394133  228393 round_trippers.go:577] Response Headers:
	I0717 19:05:03.394142  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:05:03 GMT
	I0717 19:05:03.394149  228393 round_trippers.go:580]     Audit-Id: 2b4f2264-dede-44e5-ad64-b68b07410aa3
	I0717 19:05:03.394158  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:05:03.394167  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:05:03.394180  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:05:03.394193  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:05:03.394285  228393 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-549411","namespace":"kube-system","uid":"b8bd6e94-7419-4088-922a-844632299e1c","resourceVersion":"304","creationTimestamp":"2023-07-17T19:03:59Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"c5621a2a52a4124e1c104e10aea0070e","kubernetes.io/config.mirror":"c5621a2a52a4124e1c104e10aea0070e","kubernetes.io/config.seen":"2023-07-17T19:03:59.528917007Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:03:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0717 19:05:03.394619  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:05:03.394633  228393 round_trippers.go:469] Request Headers:
	I0717 19:05:03.394640  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:05:03.394647  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:05:03.396393  228393 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 19:05:03.396408  228393 round_trippers.go:577] Response Headers:
	I0717 19:05:03.396415  228393 round_trippers.go:580]     Audit-Id: c833f3ab-3a73-4390-b4c9-ddd1f6285a07
	I0717 19:05:03.396420  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:05:03.396427  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:05:03.396433  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:05:03.396438  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:05:03.396447  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:05:03 GMT
	I0717 19:05:03.396606  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"424","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0717 19:05:03.396885  228393 pod_ready.go:92] pod "etcd-multinode-549411" in "kube-system" namespace has status "Ready":"True"
	I0717 19:05:03.396897  228393 pod_ready.go:81] duration metric: took 4.753953ms waiting for pod "etcd-multinode-549411" in "kube-system" namespace to be "Ready" ...
	I0717 19:05:03.396911  228393 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-549411" in "kube-system" namespace to be "Ready" ...
	I0717 19:05:03.396957  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-549411
	I0717 19:05:03.396964  228393 round_trippers.go:469] Request Headers:
	I0717 19:05:03.396970  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:05:03.396977  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:05:03.398668  228393 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 19:05:03.398691  228393 round_trippers.go:577] Response Headers:
	I0717 19:05:03.398699  228393 round_trippers.go:580]     Audit-Id: 344f30b9-d26c-465e-b484-a7dbe55bfa86
	I0717 19:05:03.398705  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:05:03.398711  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:05:03.398719  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:05:03.398727  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:05:03.398740  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:05:03 GMT
	I0717 19:05:03.398857  228393 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-549411","namespace":"kube-system","uid":"b26f076b-6354-45ef-b7c2-c8ff8b7dbc15","resourceVersion":"318","creationTimestamp":"2023-07-17T19:03:59Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"a8b15805fc8c0e859b710d18c398b2d8","kubernetes.io/config.mirror":"a8b15805fc8c0e859b710d18c398b2d8","kubernetes.io/config.seen":"2023-07-17T19:03:59.528920779Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:03:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0717 19:05:03.399306  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:05:03.399319  228393 round_trippers.go:469] Request Headers:
	I0717 19:05:03.399326  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:05:03.399333  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:05:03.401338  228393 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 19:05:03.401356  228393 round_trippers.go:577] Response Headers:
	I0717 19:05:03.401363  228393 round_trippers.go:580]     Audit-Id: 76a81800-8a39-4d34-ae9a-d53adb71e59d
	I0717 19:05:03.401369  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:05:03.401374  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:05:03.401379  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:05:03.401384  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:05:03.401390  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:05:03 GMT
	I0717 19:05:03.401484  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"424","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0717 19:05:03.401752  228393 pod_ready.go:92] pod "kube-apiserver-multinode-549411" in "kube-system" namespace has status "Ready":"True"
	I0717 19:05:03.401764  228393 pod_ready.go:81] duration metric: took 4.844814ms waiting for pod "kube-apiserver-multinode-549411" in "kube-system" namespace to be "Ready" ...
	I0717 19:05:03.401774  228393 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-549411" in "kube-system" namespace to be "Ready" ...
	I0717 19:05:03.401817  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-549411
	I0717 19:05:03.401824  228393 round_trippers.go:469] Request Headers:
	I0717 19:05:03.401831  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:05:03.401838  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:05:03.403944  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:05:03.403961  228393 round_trippers.go:577] Response Headers:
	I0717 19:05:03.403967  228393 round_trippers.go:580]     Audit-Id: ecda6c45-17f2-47a4-942d-4471d28a93e8
	I0717 19:05:03.403992  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:05:03.404003  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:05:03.404014  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:05:03.404020  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:05:03.404032  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:05:03 GMT
	I0717 19:05:03.404175  228393 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-549411","namespace":"kube-system","uid":"f4c024ba-c455-4ab3-af54-817f307a1f1a","resourceVersion":"292","creationTimestamp":"2023-07-17T19:03:59Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"67fc696688ad595b16a94c8761f652ef","kubernetes.io/config.mirror":"67fc696688ad595b16a94c8761f652ef","kubernetes.io/config.seen":"2023-07-17T19:03:59.528921974Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:03:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0717 19:05:03.404561  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:05:03.404572  228393 round_trippers.go:469] Request Headers:
	I0717 19:05:03.404578  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:05:03.404584  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:05:03.406233  228393 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0717 19:05:03.406248  228393 round_trippers.go:577] Response Headers:
	I0717 19:05:03.406254  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:05:03 GMT
	I0717 19:05:03.406260  228393 round_trippers.go:580]     Audit-Id: 1042c1ec-7a7d-4c3f-8f37-0e8af08413a1
	I0717 19:05:03.406265  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:05:03.406270  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:05:03.406276  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:05:03.406281  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:05:03.406415  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"424","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0717 19:05:03.406705  228393 pod_ready.go:92] pod "kube-controller-manager-multinode-549411" in "kube-system" namespace has status "Ready":"True"
	I0717 19:05:03.406721  228393 pod_ready.go:81] duration metric: took 4.940195ms waiting for pod "kube-controller-manager-multinode-549411" in "kube-system" namespace to be "Ready" ...
	I0717 19:05:03.406733  228393 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hzb9w" in "kube-system" namespace to be "Ready" ...
	I0717 19:05:03.578058  228393 request.go:628] Waited for 171.252692ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hzb9w
	I0717 19:05:03.578144  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hzb9w
	I0717 19:05:03.578151  228393 round_trippers.go:469] Request Headers:
	I0717 19:05:03.578162  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:05:03.578174  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:05:03.580788  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:05:03.580815  228393 round_trippers.go:577] Response Headers:
	I0717 19:05:03.580825  228393 round_trippers.go:580]     Audit-Id: 877f2e29-fd7c-4ed9-8886-fea53d1f34a9
	I0717 19:05:03.580834  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:05:03.580843  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:05:03.580851  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:05:03.580864  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:05:03.580880  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:05:03 GMT
	I0717 19:05:03.581003  228393 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hzb9w","generateName":"kube-proxy-","namespace":"kube-system","uid":"612c55b1-0ad0-4c37-80d1-931cdd2767aa","resourceVersion":"401","creationTimestamp":"2023-07-17T19:04:11Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"70320397-70b5-4707-9e0c-bffe37cfd3e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:04:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"70320397-70b5-4707-9e0c-bffe37cfd3e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0717 19:05:03.778840  228393 request.go:628] Waited for 197.382736ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:05:03.778907  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:05:03.778912  228393 round_trippers.go:469] Request Headers:
	I0717 19:05:03.778925  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:05:03.778934  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:05:03.781392  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:05:03.781418  228393 round_trippers.go:577] Response Headers:
	I0717 19:05:03.781428  228393 round_trippers.go:580]     Audit-Id: 019061c5-7ac0-47b5-9b73-faf31ea5ddbb
	I0717 19:05:03.781435  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:05:03.781442  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:05:03.781450  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:05:03.781458  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:05:03.781467  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:05:03 GMT
	I0717 19:05:03.781589  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"424","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0717 19:05:03.781923  228393 pod_ready.go:92] pod "kube-proxy-hzb9w" in "kube-system" namespace has status "Ready":"True"
	I0717 19:05:03.781943  228393 pod_ready.go:81] duration metric: took 375.201143ms waiting for pod "kube-proxy-hzb9w" in "kube-system" namespace to be "Ready" ...
	I0717 19:05:03.781957  228393 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-sq6cs" in "kube-system" namespace to be "Ready" ...
	I0717 19:05:03.978436  228393 request.go:628] Waited for 196.394451ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sq6cs
	I0717 19:05:03.978510  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sq6cs
	I0717 19:05:03.978515  228393 round_trippers.go:469] Request Headers:
	I0717 19:05:03.978523  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:05:03.978530  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:05:03.981079  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:05:03.981104  228393 round_trippers.go:577] Response Headers:
	I0717 19:05:03.981114  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:05:03.981122  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:05:03 GMT
	I0717 19:05:03.981130  228393 round_trippers.go:580]     Audit-Id: 88872df8-3e59-4eeb-b21c-1c73f63f92cb
	I0717 19:05:03.981138  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:05:03.981150  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:05:03.981159  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:05:03.981322  228393 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-sq6cs","generateName":"kube-proxy-","namespace":"kube-system","uid":"5c0c6d53-9e39-4b36-bea8-24d39999a3bf","resourceVersion":"495","creationTimestamp":"2023-07-17T19:05:01Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"70320397-70b5-4707-9e0c-bffe37cfd3e4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:05:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"70320397-70b5-4707-9e0c-bffe37cfd3e4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0717 19:05:04.178165  228393 request.go:628] Waited for 196.300169ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-549411-m02
	I0717 19:05:04.178251  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411-m02
	I0717 19:05:04.178257  228393 round_trippers.go:469] Request Headers:
	I0717 19:05:04.178265  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:05:04.178271  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:05:04.180826  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:05:04.180845  228393 round_trippers.go:577] Response Headers:
	I0717 19:05:04.180853  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:05:04 GMT
	I0717 19:05:04.180859  228393 round_trippers.go:580]     Audit-Id: 55bad2c6-9fa9-4caf-8aeb-3f433187b4ef
	I0717 19:05:04.180864  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:05:04.180871  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:05:04.180877  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:05:04.180882  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:05:04.181032  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411-m02","uid":"59b13224-e146-4ea6-a6f7-008368de59b9","resourceVersion":"501","creationTimestamp":"2023-07-17T19:05:01Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:05:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:05:01Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5296 chars]
	I0717 19:05:04.181460  228393 pod_ready.go:92] pod "kube-proxy-sq6cs" in "kube-system" namespace has status "Ready":"True"
	I0717 19:05:04.181479  228393 pod_ready.go:81] duration metric: took 399.513334ms waiting for pod "kube-proxy-sq6cs" in "kube-system" namespace to be "Ready" ...
	I0717 19:05:04.181492  228393 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-549411" in "kube-system" namespace to be "Ready" ...
	I0717 19:05:04.378989  228393 request.go:628] Waited for 197.393454ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-549411
	I0717 19:05:04.379049  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-549411
	I0717 19:05:04.379056  228393 round_trippers.go:469] Request Headers:
	I0717 19:05:04.379066  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:05:04.379084  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:05:04.381657  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:05:04.381681  228393 round_trippers.go:577] Response Headers:
	I0717 19:05:04.381692  228393 round_trippers.go:580]     Audit-Id: 95e00572-6e85-4e25-90bb-a1823949bb2f
	I0717 19:05:04.381702  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:05:04.381711  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:05:04.381720  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:05:04.381727  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:05:04.381739  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:05:04 GMT
	I0717 19:05:04.381851  228393 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-549411","namespace":"kube-system","uid":"ec3f8f05-ca8c-40fe-b852-06306bfeb4f0","resourceVersion":"325","creationTimestamp":"2023-07-17T19:03:57Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"a1521f8f9a6e2a2d24ff9b0f01c1b786","kubernetes.io/config.mirror":"a1521f8f9a6e2a2d24ff9b0f01c1b786","kubernetes.io/config.seen":"2023-07-17T19:03:53.549919528Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-17T19:03:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0717 19:05:04.578736  228393 request.go:628] Waited for 196.423798ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:05:04.578807  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-549411
	I0717 19:05:04.578813  228393 round_trippers.go:469] Request Headers:
	I0717 19:05:04.578820  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:05:04.578827  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:05:04.581375  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:05:04.581397  228393 round_trippers.go:577] Response Headers:
	I0717 19:05:04.581405  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:05:04.581411  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:05:04 GMT
	I0717 19:05:04.581417  228393 round_trippers.go:580]     Audit-Id: a0fa8b16-3825-4e42-ab1e-25f46e4b66d7
	I0717 19:05:04.581422  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:05:04.581427  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:05:04.581434  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:05:04.581605  228393 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"424","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-07-17T19:03:56Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0717 19:05:04.581971  228393 pod_ready.go:92] pod "kube-scheduler-multinode-549411" in "kube-system" namespace has status "Ready":"True"
	I0717 19:05:04.581989  228393 pod_ready.go:81] duration metric: took 400.488516ms waiting for pod "kube-scheduler-multinode-549411" in "kube-system" namespace to be "Ready" ...
	I0717 19:05:04.582000  228393 pod_ready.go:38] duration metric: took 1.200853247s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:05:04.582017  228393 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 19:05:04.582073  228393 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:05:04.592866  228393 system_svc.go:56] duration metric: took 10.839256ms WaitForService to wait for kubelet.
	I0717 19:05:04.592897  228393 kubeadm.go:581] duration metric: took 2.736054635s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 19:05:04.592924  228393 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:05:04.778462  228393 request.go:628] Waited for 185.446704ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0717 19:05:04.778606  228393 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0717 19:05:04.778632  228393 round_trippers.go:469] Request Headers:
	I0717 19:05:04.778641  228393 round_trippers.go:473]     Accept: application/json, */*
	I0717 19:05:04.778654  228393 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0717 19:05:04.781338  228393 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0717 19:05:04.781366  228393 round_trippers.go:577] Response Headers:
	I0717 19:05:04.781375  228393 round_trippers.go:580]     Content-Type: application/json
	I0717 19:05:04.781382  228393 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ba95f413-59e3-44a3-b671-f9f138a8579b
	I0717 19:05:04.781389  228393 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 2434b165-3c69-48d4-bc5c-628c496f7b0c
	I0717 19:05:04.781394  228393 round_trippers.go:580]     Date: Mon, 17 Jul 2023 19:05:04 GMT
	I0717 19:05:04.781400  228393 round_trippers.go:580]     Audit-Id: 8a995908-8baf-495f-8782-c435314c6753
	I0717 19:05:04.781406  228393 round_trippers.go:580]     Cache-Control: no-cache, private
	I0717 19:05:04.781577  228393 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"502"},"items":[{"metadata":{"name":"multinode-549411","uid":"ae49d2ce-f30e-437f-927f-d21e86ffcb0e","resourceVersion":"424","creationTimestamp":"2023-07-17T19:03:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-549411","kubernetes.io/os":"linux","minikube.k8s.io/commit":"46c8e7c4243e42e98d29628785c0523fbabbd9b5","minikube.k8s.io/name":"multinode-549411","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_17T19_04_00_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12288 chars]
	I0717 19:05:04.782110  228393 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0717 19:05:04.782131  228393 node_conditions.go:123] node cpu capacity is 8
	I0717 19:05:04.782141  228393 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0717 19:05:04.782144  228393 node_conditions.go:123] node cpu capacity is 8
	I0717 19:05:04.782148  228393 node_conditions.go:105] duration metric: took 189.219254ms to run NodePressure ...
	I0717 19:05:04.782162  228393 start.go:228] waiting for startup goroutines ...
	I0717 19:05:04.782234  228393 start.go:242] writing updated cluster config ...
	I0717 19:05:04.782593  228393 ssh_runner.go:195] Run: rm -f paused
	I0717 19:05:04.831392  228393 start.go:578] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0717 19:05:04.834938  228393 out.go:177] * Done! kubectl is now configured to use "multinode-549411" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Jul 17 19:04:44 multinode-549411 crio[957]: time="2023-07-17 19:04:44.927787280Z" level=info msg="Created container f35793354c4bd5ca8487f32e8f47d92913e091f2816281350c53a950773d41d1: kube-system/coredns-5d78c9869d-98dl8/coredns" id=b67f54b3-ad4f-48f7-9d59-5832e1dbcde8 name=/runtime.v1.RuntimeService/CreateContainer
	Jul 17 19:04:44 multinode-549411 crio[957]: time="2023-07-17 19:04:44.927846386Z" level=info msg="Starting container: e637d168846d91619af8765e336e0c07c0e081f85d56e810045853dc7fe2c4fa" id=c6acb6bf-e976-4e15-a9ff-4061e44c12e9 name=/runtime.v1.RuntimeService/StartContainer
	Jul 17 19:04:44 multinode-549411 crio[957]: time="2023-07-17 19:04:44.928264433Z" level=info msg="Starting container: f35793354c4bd5ca8487f32e8f47d92913e091f2816281350c53a950773d41d1" id=e5b9fdd1-fca3-46d4-8f05-0b2abba5ca92 name=/runtime.v1.RuntimeService/StartContainer
	Jul 17 19:04:44 multinode-549411 crio[957]: time="2023-07-17 19:04:44.938177400Z" level=info msg="Started container" PID=2335 containerID=e637d168846d91619af8765e336e0c07c0e081f85d56e810045853dc7fe2c4fa description=kube-system/storage-provisioner/storage-provisioner id=c6acb6bf-e976-4e15-a9ff-4061e44c12e9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ddb1950a6caff39803f125f82d576e94aec1e85c8afdc58533081e49cb938cd7
	Jul 17 19:04:44 multinode-549411 crio[957]: time="2023-07-17 19:04:44.961354157Z" level=info msg="Started container" PID=2342 containerID=f35793354c4bd5ca8487f32e8f47d92913e091f2816281350c53a950773d41d1 description=kube-system/coredns-5d78c9869d-98dl8/coredns id=e5b9fdd1-fca3-46d4-8f05-0b2abba5ca92 name=/runtime.v1.RuntimeService/StartContainer sandboxID=dbe5638cb2e916abc8befe370c5de545c8130edcf65f90e8bea166191f174cfc
	Jul 17 19:05:05 multinode-549411 crio[957]: time="2023-07-17 19:05:05.869126193Z" level=info msg="Running pod sandbox: default/busybox-67b7f59bb-rww5s/POD" id=9de2ed67-82e4-4803-84ff-52d682714c87 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jul 17 19:05:05 multinode-549411 crio[957]: time="2023-07-17 19:05:05.869224126Z" level=warning msg="Allowed annotations are specified for workload []"
	Jul 17 19:05:05 multinode-549411 crio[957]: time="2023-07-17 19:05:05.883460744Z" level=info msg="Got pod network &{Name:busybox-67b7f59bb-rww5s Namespace:default ID:c0e0e28e4a661d4f61f8246be0dc7da7d4d93741b9edca47584a36eaeb30643d UID:51cd4c79-659e-4f2d-95e8-2c4d62a68d37 NetNS:/var/run/netns/e0293456-794e-498f-b27f-32e13cedff72 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jul 17 19:05:05 multinode-549411 crio[957]: time="2023-07-17 19:05:05.883499125Z" level=info msg="Adding pod default_busybox-67b7f59bb-rww5s to CNI network \"kindnet\" (type=ptp)"
	Jul 17 19:05:05 multinode-549411 crio[957]: time="2023-07-17 19:05:05.892401085Z" level=info msg="Got pod network &{Name:busybox-67b7f59bb-rww5s Namespace:default ID:c0e0e28e4a661d4f61f8246be0dc7da7d4d93741b9edca47584a36eaeb30643d UID:51cd4c79-659e-4f2d-95e8-2c4d62a68d37 NetNS:/var/run/netns/e0293456-794e-498f-b27f-32e13cedff72 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Jul 17 19:05:05 multinode-549411 crio[957]: time="2023-07-17 19:05:05.892532518Z" level=info msg="Checking pod default_busybox-67b7f59bb-rww5s for CNI network kindnet (type=ptp)"
	Jul 17 19:05:05 multinode-549411 crio[957]: time="2023-07-17 19:05:05.907558466Z" level=info msg="Ran pod sandbox c0e0e28e4a661d4f61f8246be0dc7da7d4d93741b9edca47584a36eaeb30643d with infra container: default/busybox-67b7f59bb-rww5s/POD" id=9de2ed67-82e4-4803-84ff-52d682714c87 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jul 17 19:05:05 multinode-549411 crio[957]: time="2023-07-17 19:05:05.908790656Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=1e05a4fe-b2fe-4f41-a37e-21779140d45d name=/runtime.v1.ImageService/ImageStatus
	Jul 17 19:05:05 multinode-549411 crio[957]: time="2023-07-17 19:05:05.908993938Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=1e05a4fe-b2fe-4f41-a37e-21779140d45d name=/runtime.v1.ImageService/ImageStatus
	Jul 17 19:05:05 multinode-549411 crio[957]: time="2023-07-17 19:05:05.909764706Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=146ff7ba-3b57-4fdd-9c67-68486f0e544c name=/runtime.v1.ImageService/PullImage
	Jul 17 19:05:05 multinode-549411 crio[957]: time="2023-07-17 19:05:05.913701856Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Jul 17 19:05:06 multinode-549411 crio[957]: time="2023-07-17 19:05:06.164580577Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Jul 17 19:05:06 multinode-549411 crio[957]: time="2023-07-17 19:05:06.651335888Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335" id=146ff7ba-3b57-4fdd-9c67-68486f0e544c name=/runtime.v1.ImageService/PullImage
	Jul 17 19:05:06 multinode-549411 crio[957]: time="2023-07-17 19:05:06.652397304Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=d5a819c0-0324-4ed0-87f1-b689bf2cbe40 name=/runtime.v1.ImageService/ImageStatus
	Jul 17 19:05:06 multinode-549411 crio[957]: time="2023-07-17 19:05:06.653119823Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=d5a819c0-0324-4ed0-87f1-b689bf2cbe40 name=/runtime.v1.ImageService/ImageStatus
	Jul 17 19:05:06 multinode-549411 crio[957]: time="2023-07-17 19:05:06.653872265Z" level=info msg="Creating container: default/busybox-67b7f59bb-rww5s/busybox" id=2e2628c0-adc6-455d-a7fe-381b133e6af7 name=/runtime.v1.RuntimeService/CreateContainer
	Jul 17 19:05:06 multinode-549411 crio[957]: time="2023-07-17 19:05:06.653969512Z" level=warning msg="Allowed annotations are specified for workload []"
	Jul 17 19:05:06 multinode-549411 crio[957]: time="2023-07-17 19:05:06.734502776Z" level=info msg="Created container 673dc90c81d76e6a7c466b04029e440209ad5ecfd9411d0254040c6b138d185e: default/busybox-67b7f59bb-rww5s/busybox" id=2e2628c0-adc6-455d-a7fe-381b133e6af7 name=/runtime.v1.RuntimeService/CreateContainer
	Jul 17 19:05:06 multinode-549411 crio[957]: time="2023-07-17 19:05:06.735130786Z" level=info msg="Starting container: 673dc90c81d76e6a7c466b04029e440209ad5ecfd9411d0254040c6b138d185e" id=d5399110-2100-4299-94be-eb66cbca36a9 name=/runtime.v1.RuntimeService/StartContainer
	Jul 17 19:05:06 multinode-549411 crio[957]: time="2023-07-17 19:05:06.744844221Z" level=info msg="Started container" PID=2506 containerID=673dc90c81d76e6a7c466b04029e440209ad5ecfd9411d0254040c6b138d185e description=default/busybox-67b7f59bb-rww5s/busybox id=d5399110-2100-4299-94be-eb66cbca36a9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c0e0e28e4a661d4f61f8246be0dc7da7d4d93741b9edca47584a36eaeb30643d
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	673dc90c81d76       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 seconds ago        Running             busybox                   0                   c0e0e28e4a661       busybox-67b7f59bb-rww5s
	f35793354c4bd       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      25 seconds ago       Running             coredns                   0                   dbe5638cb2e91       coredns-5d78c9869d-98dl8
	e637d168846d9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      25 seconds ago       Running             storage-provisioner       0                   ddb1950a6caff       storage-provisioner
	fcc3499e8337d       b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da                                      57 seconds ago       Running             kindnet-cni               0                   ee373e9d75536       kindnet-zjw42
	c89d3cecb29a9       5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c                                      57 seconds ago       Running             kube-proxy                0                   c63ac61af43ad       kube-proxy-hzb9w
	d7f1e7f9adbba       7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f                                      About a minute ago   Running             kube-controller-manager   0                   def76f368bb89       kube-controller-manager-multinode-549411
	e575cb230a4a4       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681                                      About a minute ago   Running             etcd                      0                   7c7dbcec672b9       etcd-multinode-549411
	32652e788f753       41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a                                      About a minute ago   Running             kube-scheduler            0                   66e9718ae0fea       kube-scheduler-multinode-549411
	98f5a2770bd6c       08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a                                      About a minute ago   Running             kube-apiserver            0                   ef824b8a984fc       kube-apiserver-multinode-549411
	
	* 
	* ==> coredns [f35793354c4bd5ca8487f32e8f47d92913e091f2816281350c53a950773d41d1] <==
	* [INFO] 10.244.1.2:56375 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000105664s
	[INFO] 10.244.0.3:58849 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111461s
	[INFO] 10.244.0.3:54183 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001799158s
	[INFO] 10.244.0.3:36169 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000104023s
	[INFO] 10.244.0.3:51732 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000054107s
	[INFO] 10.244.0.3:36326 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001299327s
	[INFO] 10.244.0.3:40973 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000063728s
	[INFO] 10.244.0.3:45481 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000040371s
	[INFO] 10.244.0.3:52951 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000055209s
	[INFO] 10.244.1.2:45807 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120886s
	[INFO] 10.244.1.2:48988 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000109406s
	[INFO] 10.244.1.2:50609 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00006176s
	[INFO] 10.244.1.2:39339 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067875s
	[INFO] 10.244.0.3:33880 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000093576s
	[INFO] 10.244.0.3:36647 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000075408s
	[INFO] 10.244.0.3:49735 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000072004s
	[INFO] 10.244.0.3:35425 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070302s
	[INFO] 10.244.1.2:43705 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000123168s
	[INFO] 10.244.1.2:38538 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000138599s
	[INFO] 10.244.1.2:44361 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000122229s
	[INFO] 10.244.1.2:47886 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000103839s
	[INFO] 10.244.0.3:35920 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117222s
	[INFO] 10.244.0.3:56582 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000099388s
	[INFO] 10.244.0.3:39444 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000055181s
	[INFO] 10.244.0.3:33457 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000072879s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-549411
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-549411
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=46c8e7c4243e42e98d29628785c0523fbabbd9b5
	                    minikube.k8s.io/name=multinode-549411
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_17T19_04_00_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 19:03:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-549411
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 19:05:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 19:04:44 +0000   Mon, 17 Jul 2023 19:03:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 19:04:44 +0000   Mon, 17 Jul 2023 19:03:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 19:04:44 +0000   Mon, 17 Jul 2023 19:03:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 19:04:44 +0000   Mon, 17 Jul 2023 19:04:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-549411
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	System Info:
	  Machine ID:                 fbdeed9af6c048539cbed08c8163011f
	  System UUID:                29871ac4-b6eb-49aa-bc44-015c439e598e
	  Boot ID:                    72066744-0b12-457f-a61f-5086cdf4a210
	  Kernel Version:             5.15.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-rww5s                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 coredns-5d78c9869d-98dl8                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     58s
	  kube-system                 etcd-multinode-549411                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         71s
	  kube-system                 kindnet-zjw42                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      59s
	  kube-system                 kube-apiserver-multinode-549411             250m (3%)     0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-controller-manager-multinode-549411    200m (2%)     0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-proxy-hzb9w                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-scheduler-multinode-549411             100m (1%)     0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  77s (x9 over 77s)  kubelet          Node multinode-549411 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    77s (x8 over 77s)  kubelet          Node multinode-549411 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     77s (x7 over 77s)  kubelet          Node multinode-549411 status is now: NodeHasSufficientPID
	  Normal  Starting                 71s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  71s                kubelet          Node multinode-549411 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    71s                kubelet          Node multinode-549411 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     71s                kubelet          Node multinode-549411 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           59s                node-controller  Node multinode-549411 event: Registered Node multinode-549411 in Controller
	  Normal  NodeReady                26s                kubelet          Node multinode-549411 status is now: NodeReady
	
	
	Name:               multinode-549411-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-549411-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 19:05:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-549411-m02" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 19:05:03 +0000   Mon, 17 Jul 2023 19:05:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 19:05:03 +0000   Mon, 17 Jul 2023 19:05:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 19:05:03 +0000   Mon, 17 Jul 2023 19:05:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 19:05:03 +0000   Mon, 17 Jul 2023 19:05:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-549411-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	System Info:
	  Machine ID:                 c4c5462df9d54c94b973970f4423454e
	  System UUID:                97de0de5-9ac4-4e16-b67a-068423225d4a
	  Boot ID:                    72066744-0b12-457f-a61f-5086cdf4a210
	  Kernel Version:             5.15.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-8mh6q    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 kindnet-wplgr              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      9s
	  kube-system                 kube-proxy-sq6cs           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age               From             Message
	  ----    ------                   ----              ----             -------
	  Normal  Starting                 8s                kube-proxy       
	  Normal  RegisteredNode           9s                node-controller  Node multinode-549411-m02 event: Registered Node multinode-549411-m02 in Controller
	  Normal  NodeHasSufficientMemory  9s (x5 over 11s)  kubelet          Node multinode-549411-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x5 over 11s)  kubelet          Node multinode-549411-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x5 over 11s)  kubelet          Node multinode-549411-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7s                kubelet          Node multinode-549411-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.005023] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.008130] FS-Cache: N-cookie d=00000000b95e8ad8{9p.inode} n=00000000642f0408
	[  +0.008754] FS-Cache: N-key=[8] '89a30f0200000000'
	[  +0.281824] FS-Cache: Duplicate cookie detected
	[  +0.004719] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006751] FS-Cache: O-cookie d=00000000b95e8ad8{9p.inode} n=0000000025805cad
	[  +0.007382] FS-Cache: O-key=[8] '94a30f0200000000'
	[  +0.004943] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006556] FS-Cache: N-cookie d=00000000b95e8ad8{9p.inode} n=000000004cb50f63
	[  +0.007445] FS-Cache: N-key=[8] '94a30f0200000000'
	[Jul17 18:54] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Jul17 18:56] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: f2 e3 c3 88 e1 8a 72 c9 83 e6 6a 29 08 00
	[  +1.019187] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000028] ll header: 00000000: f2 e3 c3 88 e1 8a 72 c9 83 e6 6a 29 08 00
	[  +2.015826] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: f2 e3 c3 88 e1 8a 72 c9 83 e6 6a 29 08 00
	[  +4.127678] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: f2 e3 c3 88 e1 8a 72 c9 83 e6 6a 29 08 00
	[  +8.191406] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: f2 e3 c3 88 e1 8a 72 c9 83 e6 6a 29 08 00
	[ +16.126801] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: f2 e3 c3 88 e1 8a 72 c9 83 e6 6a 29 08 00
	[Jul17 18:57] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: f2 e3 c3 88 e1 8a 72 c9 83 e6 6a 29 08 00
	
	* 
	* ==> etcd [e575cb230a4a4b180b66d790b5535bdb71c492bfb9a85ff1517f867aeb930760] <==
	* {"level":"info","ts":"2023-07-17T19:03:54.397Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-07-17T19:03:54.397Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-07-17T19:03:54.397Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-07-17T19:03:54.397Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-07-17T19:03:54.397Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-07-17T19:03:55.178Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-07-17T19:03:55.178Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-07-17T19:03:55.178Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-07-17T19:03:55.178Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-07-17T19:03:55.178Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-07-17T19:03:55.178Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-07-17T19:03:55.178Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-07-17T19:03:55.180Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T19:03:55.180Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-549411 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-17T19:03:55.180Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T19:03:55.180Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T19:03:55.181Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-17T19:03:55.181Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-17T19:03:55.181Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T19:03:55.181Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T19:03:55.181Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T19:03:55.182Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-07-17T19:03:55.182Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-07-17T19:04:51.134Z","caller":"traceutil/trace.go:171","msg":"trace[178668676] transaction","detail":"{read_only:false; response_revision:449; number_of_response:1; }","duration":"119.087594ms","start":"2023-07-17T19:04:51.015Z","end":"2023-07-17T19:04:51.134Z","steps":["trace[178668676] 'process raft request'  (duration: 118.963478ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T19:04:51.237Z","caller":"traceutil/trace.go:171","msg":"trace[1772000028] transaction","detail":"{read_only:false; response_revision:450; number_of_response:1; }","duration":"219.682672ms","start":"2023-07-17T19:04:51.017Z","end":"2023-07-17T19:04:51.237Z","steps":["trace[1772000028] 'process raft request'  (duration: 160.197771ms)","trace[1772000028] 'compare'  (duration: 59.364245ms)"],"step_count":2}
	
	* 
	* ==> kernel <==
	*  19:05:10 up  3:47,  0 users,  load average: 0.82, 1.10, 1.57
	Linux multinode-549411 5.15.0-1037-gcp #45~20.04.1-Ubuntu SMP Thu Jun 22 08:31:09 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [fcc3499e8337d9c1ad9f91e9c08483c6a935f6ee1fdfae2c2e87bd06e1c60bcb] <==
	* I0717 19:04:13.667923       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0717 19:04:13.668034       1 main.go:107] hostIP = 192.168.58.2
	podIP = 192.168.58.2
	I0717 19:04:13.668239       1 main.go:116] setting mtu 1500 for CNI 
	I0717 19:04:13.668264       1 main.go:146] kindnetd IP family: "ipv4"
	I0717 19:04:13.668293       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0717 19:04:43.896339       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0717 19:04:43.905460       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0717 19:04:43.905529       1 main.go:227] handling current node
	I0717 19:04:53.919398       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0717 19:04:53.919428       1 main.go:227] handling current node
	I0717 19:05:03.931817       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0717 19:05:03.931842       1 main.go:227] handling current node
	I0717 19:05:03.931853       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0717 19:05:03.931857       1 main.go:250] Node multinode-549411-m02 has CIDR [10.244.1.0/24] 
	I0717 19:05:03.932074       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	
	* 
	* ==> kube-apiserver [98f5a2770bd6ce0b2127196c123d3d2a8a19411535ea1ab32bb2615c70cd01b5] <==
	* I0717 19:03:56.362081       1 autoregister_controller.go:141] Starting autoregister controller
	I0717 19:03:56.362087       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0717 19:03:56.362094       1 cache.go:39] Caches are synced for autoregister controller
	I0717 19:03:56.363102       1 shared_informer.go:318] Caches are synced for configmaps
	I0717 19:03:56.368327       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0717 19:03:56.369640       1 controller.go:624] quota admission added evaluator for: namespaces
	E0717 19:03:56.376509       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	E0717 19:03:56.466561       1 controller.go:150] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I0717 19:03:56.578967       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 19:03:57.011404       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0717 19:03:57.230640       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0717 19:03:57.234864       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0717 19:03:57.234882       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0717 19:03:57.632631       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 19:03:57.666099       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0717 19:03:57.796914       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0717 19:03:57.802733       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0717 19:03:57.803835       1 controller.go:624] quota admission added evaluator for: endpoints
	I0717 19:03:57.807619       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0717 19:03:58.293322       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0717 19:03:59.472653       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0717 19:03:59.483136       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0717 19:03:59.493001       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0717 19:04:11.934936       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0717 19:04:12.785049       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [d7f1e7f9adbba1fd6164505b7cdf1e98677d91eef163600248bdf0bdffbfab91] <==
	* I0717 19:04:12.056063       1 shared_informer.go:318] Caches are synced for ephemeral
	I0717 19:04:12.079072       1 shared_informer.go:318] Caches are synced for stateful set
	I0717 19:04:12.083521       1 shared_informer.go:318] Caches are synced for persistent volume
	I0717 19:04:12.088955       1 shared_informer.go:318] Caches are synced for resource quota
	I0717 19:04:12.134943       1 shared_informer.go:318] Caches are synced for resource quota
	I0717 19:04:12.445996       1 shared_informer.go:318] Caches are synced for garbage collector
	I0717 19:04:12.465205       1 shared_informer.go:318] Caches are synced for garbage collector
	I0717 19:04:12.465234       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0717 19:04:12.790067       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5d78c9869d to 2"
	I0717 19:04:12.873523       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5d78c9869d to 1 from 2"
	I0717 19:04:12.977078       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-dt2wc"
	I0717 19:04:12.982684       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-98dl8"
	I0717 19:04:13.075288       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5d78c9869d-dt2wc"
	I0717 19:04:46.948211       1 node_lifecycle_controller.go:1046] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0717 19:05:01.284132       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-549411-m02\" does not exist"
	I0717 19:05:01.292058       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-549411-m02" podCIDRs=[10.244.1.0/24]
	I0717 19:05:01.294288       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-sq6cs"
	I0717 19:05:01.294387       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-wplgr"
	I0717 19:05:01.950207       1 event.go:307] "Event occurred" object="multinode-549411-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-549411-m02 event: Registered Node multinode-549411-m02 in Controller"
	I0717 19:05:01.950251       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-549411-m02"
	W0717 19:05:03.040561       1 topologycache.go:232] Can't get CPU or zone information for multinode-549411-m02 node
	I0717 19:05:05.545889       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-67b7f59bb to 2"
	I0717 19:05:05.554012       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-8mh6q"
	I0717 19:05:05.559057       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-rww5s"
	I0717 19:05:06.961821       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb-8mh6q" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-67b7f59bb-8mh6q"
	
	* 
	* ==> kube-proxy [c89d3cecb29a95b9435cdc404054a60a8c7a6fda9feabed2948d408cfc448d1e] <==
	* I0717 19:04:13.679353       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I0717 19:04:13.679604       1 server_others.go:110] "Detected node IP" address="192.168.58.2"
	I0717 19:04:13.679680       1 server_others.go:554] "Using iptables proxy"
	I0717 19:04:13.776456       1 server_others.go:192] "Using iptables Proxier"
	I0717 19:04:13.776511       1 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0717 19:04:13.776524       1 server_others.go:200] "Creating dualStackProxier for iptables"
	I0717 19:04:13.776540       1 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0717 19:04:13.776579       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 19:04:13.777335       1 server.go:658] "Version info" version="v1.27.3"
	I0717 19:04:13.777360       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 19:04:13.778058       1 config.go:97] "Starting endpoint slice config controller"
	I0717 19:04:13.778141       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0717 19:04:13.778096       1 config.go:315] "Starting node config controller"
	I0717 19:04:13.778664       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0717 19:04:13.778576       1 config.go:188] "Starting service config controller"
	I0717 19:04:13.778715       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0717 19:04:13.878754       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0717 19:04:13.878816       1 shared_informer.go:318] Caches are synced for node config
	I0717 19:04:13.879946       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [32652e788f75376aa1a7b569dea29770df8e250b5cb754383d3f30c95b6c3a0d] <==
	* W0717 19:03:56.372452       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0717 19:03:56.372526       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0717 19:03:56.372674       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0717 19:03:56.372732       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0717 19:03:56.372841       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0717 19:03:56.372893       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0717 19:03:56.372994       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 19:03:56.373046       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 19:03:56.373152       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 19:03:56.373202       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 19:03:56.373340       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0717 19:03:56.373392       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0717 19:03:56.373479       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 19:03:56.373527       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 19:03:56.373702       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 19:03:56.373765       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0717 19:03:56.380387       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 19:03:56.380492       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 19:03:57.242370       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0717 19:03:57.242402       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 19:03:57.432528       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 19:03:57.432561       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 19:03:57.474162       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 19:03:57.474198       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0717 19:03:59.969490       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Jul 17 19:04:12 multinode-549411 kubelet[1596]: E0717 19:04:12.166718    1596 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jul 17 19:04:12 multinode-549411 kubelet[1596]: E0717 19:04:12.166748    1596 projected.go:198] Error preparing data for projected volume kube-api-access-vccdt for pod kube-system/kube-proxy-hzb9w: configmap "kube-root-ca.crt" not found
	Jul 17 19:04:12 multinode-549411 kubelet[1596]: E0717 19:04:12.166815    1596 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/612c55b1-0ad0-4c37-80d1-931cdd2767aa-kube-api-access-vccdt podName:612c55b1-0ad0-4c37-80d1-931cdd2767aa nodeName:}" failed. No retries permitted until 2023-07-17 19:04:12.666793214 +0000 UTC m=+13.220721287 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vccdt" (UniqueName: "kubernetes.io/projected/612c55b1-0ad0-4c37-80d1-931cdd2767aa-kube-api-access-vccdt") pod "kube-proxy-hzb9w" (UID: "612c55b1-0ad0-4c37-80d1-931cdd2767aa") : configmap "kube-root-ca.crt" not found
	Jul 17 19:04:12 multinode-549411 kubelet[1596]: E0717 19:04:12.167744    1596 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jul 17 19:04:12 multinode-549411 kubelet[1596]: E0717 19:04:12.167769    1596 projected.go:198] Error preparing data for projected volume kube-api-access-ks98c for pod kube-system/kindnet-zjw42: configmap "kube-root-ca.crt" not found
	Jul 17 19:04:12 multinode-549411 kubelet[1596]: E0717 19:04:12.167829    1596 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/145336b3-84d1-459d-985a-f030ea0d3789-kube-api-access-ks98c podName:145336b3-84d1-459d-985a-f030ea0d3789 nodeName:}" failed. No retries permitted until 2023-07-17 19:04:12.667810223 +0000 UTC m=+13.221738324 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ks98c" (UniqueName: "kubernetes.io/projected/145336b3-84d1-459d-985a-f030ea0d3789-kube-api-access-ks98c") pod "kindnet-zjw42" (UID: "145336b3-84d1-459d-985a-f030ea0d3789") : configmap "kube-root-ca.crt" not found
	Jul 17 19:04:12 multinode-549411 kubelet[1596]: W0717 19:04:12.963021    1596 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/45cef728eef070a7d16b710f7f2faee4f9d97e87c3d1ccb69b7e1c7b3c92a882/crio-c63ac61af43ad9ccf5930f5e322c3fd95ff234eaef6d5f615f9ab836520f08f3 WatchSource:0}: Error finding container c63ac61af43ad9ccf5930f5e322c3fd95ff234eaef6d5f615f9ab836520f08f3: Status 404 returned error can't find the container with id c63ac61af43ad9ccf5930f5e322c3fd95ff234eaef6d5f615f9ab836520f08f3
	Jul 17 19:04:12 multinode-549411 kubelet[1596]: W0717 19:04:12.965404    1596 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/45cef728eef070a7d16b710f7f2faee4f9d97e87c3d1ccb69b7e1c7b3c92a882/crio-ee373e9d75536a424b3741c6566ecbe00b52e468b525ad5f6e0cac24ca7df8e0 WatchSource:0}: Error finding container ee373e9d75536a424b3741c6566ecbe00b52e468b525ad5f6e0cac24ca7df8e0: Status 404 returned error can't find the container with id ee373e9d75536a424b3741c6566ecbe00b52e468b525ad5f6e0cac24ca7df8e0
	Jul 17 19:04:13 multinode-549411 kubelet[1596]: I0717 19:04:13.686674    1596 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-zjw42" podStartSLOduration=2.686622616 podCreationTimestamp="2023-07-17 19:04:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-17 19:04:13.675805706 +0000 UTC m=+14.229733789" watchObservedRunningTime="2023-07-17 19:04:13.686622616 +0000 UTC m=+14.240550698"
	Jul 17 19:04:13 multinode-549411 kubelet[1596]: I0717 19:04:13.686808    1596 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-hzb9w" podStartSLOduration=2.686779565 podCreationTimestamp="2023-07-17 19:04:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-17 19:04:13.686478697 +0000 UTC m=+14.240406780" watchObservedRunningTime="2023-07-17 19:04:13.686779565 +0000 UTC m=+14.240707647"
	Jul 17 19:04:44 multinode-549411 kubelet[1596]: I0717 19:04:44.482955    1596 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jul 17 19:04:44 multinode-549411 kubelet[1596]: I0717 19:04:44.506416    1596 topology_manager.go:212] "Topology Admit Handler"
	Jul 17 19:04:44 multinode-549411 kubelet[1596]: I0717 19:04:44.507760    1596 topology_manager.go:212] "Topology Admit Handler"
	Jul 17 19:04:44 multinode-549411 kubelet[1596]: I0717 19:04:44.607510    1596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/405bbed8-a7bb-484e-b391-fc1e85d55700-tmp\") pod \"storage-provisioner\" (UID: \"405bbed8-a7bb-484e-b391-fc1e85d55700\") " pod="kube-system/storage-provisioner"
	Jul 17 19:04:44 multinode-549411 kubelet[1596]: I0717 19:04:44.607562    1596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzggm\" (UniqueName: \"kubernetes.io/projected/405bbed8-a7bb-484e-b391-fc1e85d55700-kube-api-access-wzggm\") pod \"storage-provisioner\" (UID: \"405bbed8-a7bb-484e-b391-fc1e85d55700\") " pod="kube-system/storage-provisioner"
	Jul 17 19:04:44 multinode-549411 kubelet[1596]: I0717 19:04:44.607596    1596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqqmx\" (UniqueName: \"kubernetes.io/projected/0b962161-8aa7-48e3-bfab-c96b8fcdeb95-kube-api-access-tqqmx\") pod \"coredns-5d78c9869d-98dl8\" (UID: \"0b962161-8aa7-48e3-bfab-c96b8fcdeb95\") " pod="kube-system/coredns-5d78c9869d-98dl8"
	Jul 17 19:04:44 multinode-549411 kubelet[1596]: I0717 19:04:44.607768    1596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b962161-8aa7-48e3-bfab-c96b8fcdeb95-config-volume\") pod \"coredns-5d78c9869d-98dl8\" (UID: \"0b962161-8aa7-48e3-bfab-c96b8fcdeb95\") " pod="kube-system/coredns-5d78c9869d-98dl8"
	Jul 17 19:04:44 multinode-549411 kubelet[1596]: W0717 19:04:44.856906    1596 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/45cef728eef070a7d16b710f7f2faee4f9d97e87c3d1ccb69b7e1c7b3c92a882/crio-ddb1950a6caff39803f125f82d576e94aec1e85c8afdc58533081e49cb938cd7 WatchSource:0}: Error finding container ddb1950a6caff39803f125f82d576e94aec1e85c8afdc58533081e49cb938cd7: Status 404 returned error can't find the container with id ddb1950a6caff39803f125f82d576e94aec1e85c8afdc58533081e49cb938cd7
	Jul 17 19:04:44 multinode-549411 kubelet[1596]: W0717 19:04:44.857192    1596 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/45cef728eef070a7d16b710f7f2faee4f9d97e87c3d1ccb69b7e1c7b3c92a882/crio-dbe5638cb2e916abc8befe370c5de545c8130edcf65f90e8bea166191f174cfc WatchSource:0}: Error finding container dbe5638cb2e916abc8befe370c5de545c8130edcf65f90e8bea166191f174cfc: Status 404 returned error can't find the container with id dbe5638cb2e916abc8befe370c5de545c8130edcf65f90e8bea166191f174cfc
	Jul 17 19:04:45 multinode-549411 kubelet[1596]: I0717 19:04:45.734304    1596 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=32.734253426 podCreationTimestamp="2023-07-17 19:04:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-17 19:04:45.733808703 +0000 UTC m=+46.287736786" watchObservedRunningTime="2023-07-17 19:04:45.734253426 +0000 UTC m=+46.288181508"
	Jul 17 19:04:45 multinode-549411 kubelet[1596]: I0717 19:04:45.744990    1596 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-98dl8" podStartSLOduration=33.744939229 podCreationTimestamp="2023-07-17 19:04:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-17 19:04:45.744841134 +0000 UTC m=+46.298769217" watchObservedRunningTime="2023-07-17 19:04:45.744939229 +0000 UTC m=+46.298867306"
	Jul 17 19:05:05 multinode-549411 kubelet[1596]: I0717 19:05:05.566748    1596 topology_manager.go:212] "Topology Admit Handler"
	Jul 17 19:05:05 multinode-549411 kubelet[1596]: I0717 19:05:05.730431    1596 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s776c\" (UniqueName: \"kubernetes.io/projected/51cd4c79-659e-4f2d-95e8-2c4d62a68d37-kube-api-access-s776c\") pod \"busybox-67b7f59bb-rww5s\" (UID: \"51cd4c79-659e-4f2d-95e8-2c4d62a68d37\") " pod="default/busybox-67b7f59bb-rww5s"
	Jul 17 19:05:05 multinode-549411 kubelet[1596]: W0717 19:05:05.904763    1596 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/45cef728eef070a7d16b710f7f2faee4f9d97e87c3d1ccb69b7e1c7b3c92a882/crio-c0e0e28e4a661d4f61f8246be0dc7da7d4d93741b9edca47584a36eaeb30643d WatchSource:0}: Error finding container c0e0e28e4a661d4f61f8246be0dc7da7d4d93741b9edca47584a36eaeb30643d: Status 404 returned error can't find the container with id c0e0e28e4a661d4f61f8246be0dc7da7d4d93741b9edca47584a36eaeb30643d
	Jul 17 19:05:06 multinode-549411 kubelet[1596]: I0717 19:05:06.773032    1596 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-67b7f59bb-rww5s" podStartSLOduration=1.030288401 podCreationTimestamp="2023-07-17 19:05:05 +0000 UTC" firstStartedPulling="2023-07-17 19:05:05.909182635 +0000 UTC m=+66.463110709" lastFinishedPulling="2023-07-17 19:05:06.651876674 +0000 UTC m=+67.205804749" observedRunningTime="2023-07-17 19:05:06.772751055 +0000 UTC m=+67.326679140" watchObservedRunningTime="2023-07-17 19:05:06.772982441 +0000 UTC m=+67.326910524"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-549411 -n multinode-549411
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-549411 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.23s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (83.34s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:132: (dbg) Run:  /tmp/minikube-v1.9.0.2735511663.exe start -p running-upgrade-383497 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:132: (dbg) Done: /tmp/minikube-v1.9.0.2735511663.exe start -p running-upgrade-383497 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m17.288562315s)
version_upgrade_test.go:142: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-383497 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:142: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-383497 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (2.867081367s)

                                                
                                                
-- stdout --
	* [running-upgrade-383497] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16890
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16890-138069/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-138069/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-383497 in cluster running-upgrade-383497
	* Pulling base image ...
	* Updating the running docker "running-upgrade-383497" container ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 19:15:52.402038  301015 out.go:296] Setting OutFile to fd 1 ...
	I0717 19:15:52.403619  301015 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 19:15:52.403664  301015 out.go:309] Setting ErrFile to fd 2...
	I0717 19:15:52.403689  301015 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 19:15:52.404437  301015 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-138069/.minikube/bin
	I0717 19:15:52.405171  301015 out.go:303] Setting JSON to false
	I0717 19:15:52.406954  301015 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":14303,"bootTime":1689607049,"procs":820,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 19:15:52.407050  301015 start.go:138] virtualization: kvm guest
	I0717 19:15:52.409635  301015 out.go:177] * [running-upgrade-383497] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 19:15:52.412074  301015 notify.go:220] Checking for updates...
	I0717 19:15:52.412112  301015 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 19:15:52.413955  301015 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 19:15:52.415653  301015 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16890-138069/kubeconfig
	I0717 19:15:52.417252  301015 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-138069/.minikube
	I0717 19:15:52.419201  301015 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 19:15:52.420941  301015 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 19:15:52.423103  301015 config.go:182] Loaded profile config "running-upgrade-383497": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0717 19:15:52.423130  301015 start_flags.go:695] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0717 19:15:52.425560  301015 out.go:177] * Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	I0717 19:15:52.427336  301015 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 19:15:52.452367  301015 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 19:15:52.452491  301015 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 19:15:52.522303  301015 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:true NGoroutines:71 SystemTime:2023-07-17 19:15:52.511360622 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 19:15:52.522459  301015 docker.go:294] overlay module found
	I0717 19:15:52.524946  301015 out.go:177] * Using the docker driver based on existing profile
	I0717 19:15:52.526954  301015 start.go:298] selected driver: docker
	I0717 19:15:52.526979  301015 start.go:880] validating driver "docker" against &{Name:running-upgrade-383497 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-383497 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:15:52.527117  301015 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 19:15:52.528384  301015 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 19:15:52.589284  301015 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:true NGoroutines:71 SystemTime:2023-07-17 19:15:52.579447019 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 19:15:52.589661  301015 cni.go:84] Creating CNI manager for ""
	I0717 19:15:52.589684  301015 cni.go:130] EnableDefaultCNI is true, recommending bridge
	I0717 19:15:52.589693  301015 start_flags.go:319] config:
	{Name:running-upgrade-383497 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-383497 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:15:52.592055  301015 out.go:177] * Starting control plane node running-upgrade-383497 in cluster running-upgrade-383497
	I0717 19:15:52.593612  301015 cache.go:122] Beginning downloading kic base image for docker with crio
	I0717 19:15:52.595220  301015 out.go:177] * Pulling base image ...
	I0717 19:15:52.596788  301015 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I0717 19:15:52.596886  301015 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 19:15:52.615209  301015 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0717 19:15:52.615242  301015 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	W0717 19:15:52.623128  301015 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0717 19:15:52.623295  301015 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/running-upgrade-383497/config.json ...
	I0717 19:15:52.623433  301015 cache.go:107] acquiring lock: {Name:mk0626aa4c32952c38431bc57a3be6531c251df4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:15:52.623506  301015 cache.go:107] acquiring lock: {Name:mkdd7c36248d43a8ed2da602bcfcaf77d0ba431f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:15:52.623534  301015 cache.go:107] acquiring lock: {Name:mk99778cf263ded15bef16af944ba7e5e1c2f1a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:15:52.623542  301015 cache.go:107] acquiring lock: {Name:mkba162517b3c0d46459927d0c5ebda7dc236b77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:15:52.623434  301015 cache.go:107] acquiring lock: {Name:mkd212c5db1f99d1e2779ee03e5908ac3123cf12 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:15:52.623621  301015 cache.go:107] acquiring lock: {Name:mkd71aeba8a963da4395dc7d2ffea751af49e924 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:15:52.623711  301015 cache.go:195] Successfully downloaded all kic artifacts
	I0717 19:15:52.623751  301015 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.0
	I0717 19:15:52.623759  301015 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0717 19:15:52.623775  301015 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.0
	I0717 19:15:52.623473  301015 cache.go:107] acquiring lock: {Name:mkd892d265197bba9d74c85569bdbefabd7a9143 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:15:52.623848  301015 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0717 19:15:52.623770  301015 start.go:365] acquiring machines lock for running-upgrade-383497: {Name:mk1192f9cfe80aaa2df37104e7b51b0498107cd2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:15:52.623897  301015 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0717 19:15:52.623956  301015 start.go:369] acquired machines lock for "running-upgrade-383497" in 54.972µs
	I0717 19:15:52.624020  301015 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:15:52.623708  301015 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.0
	I0717 19:15:52.624035  301015 fix.go:54] fixHost starting: m01
	I0717 19:15:52.623776  301015 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.0
	I0717 19:15:52.623432  301015 cache.go:107] acquiring lock: {Name:mkf1a1130734b2d756a0657ef9722999f48d6c2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:15:52.624277  301015 cache.go:115] /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0717 19:15:52.624298  301015 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 869.53µs
	I0717 19:15:52.624312  301015 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0717 19:15:52.624354  301015 cli_runner.go:164] Run: docker container inspect running-upgrade-383497 --format={{.State.Status}}
	I0717 19:15:52.625078  301015 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0717 19:15:52.625078  301015 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.0
	I0717 19:15:52.625097  301015 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.0
	I0717 19:15:52.625109  301015 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0717 19:15:52.625078  301015 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.0
	I0717 19:15:52.625249  301015 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0717 19:15:52.625337  301015 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.0
	I0717 19:15:52.650743  301015 fix.go:102] recreateIfNeeded on running-upgrade-383497: state=Running err=<nil>
	W0717 19:15:52.650785  301015 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 19:15:52.653613  301015 out.go:177] * Updating the running docker "running-upgrade-383497" container ...
	I0717 19:15:52.655112  301015 machine.go:88] provisioning docker machine ...
	I0717 19:15:52.655145  301015 ubuntu.go:169] provisioning hostname "running-upgrade-383497"
	I0717 19:15:52.655204  301015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-383497
	I0717 19:15:52.672199  301015 main.go:141] libmachine: Using SSH client type: native
	I0717 19:15:52.672633  301015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32928 <nil> <nil>}
	I0717 19:15:52.672648  301015 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-383497 && echo "running-upgrade-383497" | sudo tee /etc/hostname
	I0717 19:15:52.792634  301015 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-383497
	
	I0717 19:15:52.792702  301015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-383497
	I0717 19:15:52.809280  301015 main.go:141] libmachine: Using SSH client type: native
	I0717 19:15:52.809747  301015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32928 <nil> <nil>}
	I0717 19:15:52.809780  301015 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-383497' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-383497/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-383497' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:15:52.819138  301015 cache.go:162] opening:  /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0717 19:15:52.822608  301015 cache.go:162] opening:  /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0
	I0717 19:15:52.825668  301015 cache.go:162] opening:  /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0
	I0717 19:15:52.826894  301015 cache.go:162] opening:  /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0717 19:15:52.840124  301015 cache.go:162] opening:  /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0
	I0717 19:15:52.846427  301015 cache.go:162] opening:  /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0
	I0717 19:15:52.850013  301015 cache.go:162] opening:  /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0717 19:15:52.916516  301015 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:15:52.916552  301015 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16890-138069/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-138069/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-138069/.minikube}
	I0717 19:15:52.916590  301015 ubuntu.go:177] setting up certificates
	I0717 19:15:52.916606  301015 provision.go:83] configureAuth start
	I0717 19:15:52.916737  301015 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-383497
	I0717 19:15:52.931005  301015 cache.go:157] /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I0717 19:15:52.931036  301015 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 307.58423ms
	I0717 19:15:52.931053  301015 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I0717 19:15:52.938425  301015 provision.go:138] copyHostCerts
	I0717 19:15:52.938497  301015 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-138069/.minikube/ca.pem, removing ...
	I0717 19:15:52.938507  301015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-138069/.minikube/ca.pem
	I0717 19:15:52.938585  301015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-138069/.minikube/ca.pem (1078 bytes)
	I0717 19:15:52.938783  301015 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-138069/.minikube/cert.pem, removing ...
	I0717 19:15:52.938794  301015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-138069/.minikube/cert.pem
	I0717 19:15:52.938911  301015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-138069/.minikube/cert.pem (1123 bytes)
	I0717 19:15:52.939030  301015 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-138069/.minikube/key.pem, removing ...
	I0717 19:15:52.939042  301015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-138069/.minikube/key.pem
	I0717 19:15:52.939078  301015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-138069/.minikube/key.pem (1675 bytes)
	I0717 19:15:52.939144  301015 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-138069/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-383497 san=[172.17.0.3 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-383497]
	I0717 19:15:53.298688  301015 cache.go:157] /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I0717 19:15:53.298718  301015 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 675.146774ms
	I0717 19:15:53.298731  301015 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I0717 19:15:53.432840  301015 provision.go:172] copyRemoteCerts
	I0717 19:15:53.432923  301015 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:15:53.432981  301015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-383497
	I0717 19:15:53.460100  301015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32928 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/running-upgrade-383497/id_rsa Username:docker}
	I0717 19:15:53.548325  301015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 19:15:53.568525  301015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0717 19:15:53.589642  301015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 19:15:53.613699  301015 provision.go:86] duration metric: configureAuth took 697.057534ms
	I0717 19:15:53.613737  301015 ubuntu.go:193] setting minikube options for container-runtime
	I0717 19:15:53.613965  301015 config.go:182] Loaded profile config "running-upgrade-383497": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0717 19:15:53.614118  301015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-383497
	I0717 19:15:53.639272  301015 main.go:141] libmachine: Using SSH client type: native
	I0717 19:15:53.639738  301015 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32928 <nil> <nil>}
	I0717 19:15:53.639764  301015 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:15:53.850460  301015 cache.go:157] /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I0717 19:15:53.850497  301015 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 1.226998919s
	I0717 19:15:53.850516  301015 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I0717 19:15:54.049276  301015 cache.go:157] /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I0717 19:15:54.049355  301015 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 1.425936349s
	I0717 19:15:54.049381  301015 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I0717 19:15:54.187371  301015 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:15:54.187413  301015 machine.go:91] provisioned docker machine in 1.532281655s
	I0717 19:15:54.187425  301015 start.go:300] post-start starting for "running-upgrade-383497" (driver="docker")
	I0717 19:15:54.187439  301015 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:15:54.187541  301015 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:15:54.187601  301015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-383497
	I0717 19:15:54.190467  301015 cache.go:157] /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I0717 19:15:54.190492  301015 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 1.567066191s
	I0717 19:15:54.190509  301015 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I0717 19:15:54.214751  301015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32928 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/running-upgrade-383497/id_rsa Username:docker}
	I0717 19:15:54.308735  301015 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:15:54.311932  301015 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 19:15:54.312041  301015 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 19:15:54.312069  301015 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 19:15:54.312112  301015 info.go:137] Remote host: Ubuntu 19.10
	I0717 19:15:54.312144  301015 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-138069/.minikube/addons for local assets ...
	I0717 19:15:54.312285  301015 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-138069/.minikube/files for local assets ...
	I0717 19:15:54.312429  301015 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/1448222.pem -> 1448222.pem in /etc/ssl/certs
	I0717 19:15:54.312594  301015 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:15:54.322631  301015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/1448222.pem --> /etc/ssl/certs/1448222.pem (1708 bytes)
	I0717 19:15:54.343068  301015 start.go:303] post-start completed in 155.623149ms
	I0717 19:15:54.343157  301015 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 19:15:54.343217  301015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-383497
	I0717 19:15:54.373042  301015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32928 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/running-upgrade-383497/id_rsa Username:docker}
	I0717 19:15:54.460698  301015 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 19:15:54.465808  301015 fix.go:56] fixHost completed within 1.841772922s
	I0717 19:15:54.465833  301015 start.go:83] releasing machines lock for "running-upgrade-383497", held for 1.841823811s
	I0717 19:15:54.465906  301015 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-383497
	I0717 19:15:54.485673  301015 ssh_runner.go:195] Run: cat /version.json
	I0717 19:15:54.485729  301015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-383497
	I0717 19:15:54.485749  301015 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:15:54.485822  301015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-383497
	I0717 19:15:54.506479  301015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32928 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/running-upgrade-383497/id_rsa Username:docker}
	I0717 19:15:54.508336  301015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32928 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/running-upgrade-383497/id_rsa Username:docker}
	I0717 19:15:54.646214  301015 cache.go:157] /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	W0717 19:15:54.646226  301015 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0717 19:15:54.646252  301015 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 2.022758756s
	I0717 19:15:54.646266  301015 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0717 19:15:54.687042  301015 cache.go:157] /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I0717 19:15:54.687078  301015 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 2.063546473s
	I0717 19:15:54.687095  301015 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I0717 19:15:54.687118  301015 cache.go:87] Successfully saved all images to host disk.
	I0717 19:15:54.687179  301015 ssh_runner.go:195] Run: systemctl --version
	I0717 19:15:54.691446  301015 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:15:54.745026  301015 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 19:15:54.749704  301015 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:15:54.767352  301015 cni.go:227] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0717 19:15:54.767468  301015 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:15:54.799612  301015 cni.go:268] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 19:15:54.799641  301015 start.go:469] detecting cgroup driver to use...
	I0717 19:15:54.799680  301015 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 19:15:54.799734  301015 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:15:54.832106  301015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:15:54.845969  301015 docker.go:196] disabling cri-docker service (if available) ...
	I0717 19:15:54.846035  301015 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:15:54.855207  301015 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:15:54.865135  301015 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0717 19:15:54.874721  301015 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0717 19:15:54.874778  301015 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:15:54.956651  301015 docker.go:212] disabling docker service ...
	I0717 19:15:54.956735  301015 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:15:54.968179  301015 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:15:54.977865  301015 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:15:55.075129  301015 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:15:55.168530  301015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:15:55.179689  301015 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:15:55.194353  301015 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0717 19:15:55.194423  301015 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:15:55.210872  301015 out.go:177] 
	W0717 19:15:55.212630  301015 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0717 19:15:55.212657  301015 out.go:239] * 
	* 
	W0717 19:15:55.213740  301015 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 19:15:55.216051  301015 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:144: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-383497 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-07-17 19:15:55.238404486 +0000 UTC m=+1833.859070365
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-383497
helpers_test.go:235: (dbg) docker inspect running-upgrade-383497:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9b1642bfedce2a80392edabf60e2cde7e85495dd1e36c8017ea4855ad424ae68",
	        "Created": "2023-07-17T19:14:44.114646699Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 281492,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-17T19:14:44.605305371Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/9b1642bfedce2a80392edabf60e2cde7e85495dd1e36c8017ea4855ad424ae68/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9b1642bfedce2a80392edabf60e2cde7e85495dd1e36c8017ea4855ad424ae68/hostname",
	        "HostsPath": "/var/lib/docker/containers/9b1642bfedce2a80392edabf60e2cde7e85495dd1e36c8017ea4855ad424ae68/hosts",
	        "LogPath": "/var/lib/docker/containers/9b1642bfedce2a80392edabf60e2cde7e85495dd1e36c8017ea4855ad424ae68/9b1642bfedce2a80392edabf60e2cde7e85495dd1e36c8017ea4855ad424ae68-json.log",
	        "Name": "/running-upgrade-383497",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-383497:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/85fd3f714b0424af3297466f195ac7ad0431f01801238c5056df353c15b1d0ee-init/diff:/var/lib/docker/overlay2/7e2a2248801f7af8668c854f4ee8d762b5434f2585fe8326278a029cbcb0686a/diff:/var/lib/docker/overlay2/4e7a799977bd02b85fc5cda4aa520b9f4391e1b7574f458b1d60aec3480577e4/diff:/var/lib/docker/overlay2/33231ab33e9e92f45a21c8d46e590e8c990d36569692fbaf8214cf2d79ee731e/diff:/var/lib/docker/overlay2/726f58c72e76f60c68bf66b26ff26e0d844cafc33a3947a89002e2a3c812d493/diff:/var/lib/docker/overlay2/fc69f6337bab71d00a3db3d3a0da86cd43ababf6be9ede0b8a35907d67d8c373/diff:/var/lib/docker/overlay2/a970e834f0e68f799b023b60dd1697d1de0bf0f5df9f5234a7eed680909b9e76/diff:/var/lib/docker/overlay2/a8892055034eaf89fe96b1806e6ac19c81452c0afa1bdc3c5e4c961fb3ac6af9/diff:/var/lib/docker/overlay2/cf6da31da4f0b99e8f000cc2bfe98d71a2015d4f86b2c794c631d1ae482616e1/diff:/var/lib/docker/overlay2/ffa193f796f3a03ccb735d8398ac069877dc375fc4939f4c2742a6e323eb9f5c/diff:/var/lib/docker/overlay2/1cbd0d
426fccf1c8376750f2e6466306d0c0658677ad78961d38f73349b1d43b/diff:/var/lib/docker/overlay2/36204a0ec951ad1c2b573b5c5ca01302b5b3eb5f6f4237395cae5fa4d8d61aee/diff:/var/lib/docker/overlay2/50ab6d3ab9062806d9a26b7c31fd295a234b80aa565901a39fec3e2e88fc6421/diff:/var/lib/docker/overlay2/4427e9104d0a7415e71bd79d78390219777abb2072e381b792fbc60eba0c7c14/diff:/var/lib/docker/overlay2/5cfe5d81ddf137709de8c4f1bd12e5c28dde5442db0c2415d4861d634df4b31b/diff:/var/lib/docker/overlay2/c70a420a8ce42b179a186821f1e6236a67504ba798260d7eac0cccc7c447edff/diff:/var/lib/docker/overlay2/d58a01b7e1b997af62e1dc8094df530d07f817b7e4afca7668a9d4d8bdcbe43e/diff:/var/lib/docker/overlay2/0ed1541777c39a263ed2c250996c83eda5379c77f6b0faa1bddc45a4a481fb3b/diff:/var/lib/docker/overlay2/394aeea0fd3376059cb1f9e5f4fdcaa2ccaba9ae75b59f74c14863ca3024e19a/diff:/var/lib/docker/overlay2/a2bd5b2cace8a364a1e9a94af4dca3ff368dea0332e8d6117096c024c22eb431/diff:/var/lib/docker/overlay2/0e0fce867e096b8d7ff363201548e682ae87e79eba01b5f614f56b387322c38a/diff:/var/lib/d
ocker/overlay2/f8e46915b46b584551debbb653020a2428191300caff3bb0b9303431a197f1d1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/85fd3f714b0424af3297466f195ac7ad0431f01801238c5056df353c15b1d0ee/merged",
	                "UpperDir": "/var/lib/docker/overlay2/85fd3f714b0424af3297466f195ac7ad0431f01801238c5056df353c15b1d0ee/diff",
	                "WorkDir": "/var/lib/docker/overlay2/85fd3f714b0424af3297466f195ac7ad0431f01801238c5056df353c15b1d0ee/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-383497",
	                "Source": "/var/lib/docker/volumes/running-upgrade-383497/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-383497",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-383497",
	                "name.minikube.sigs.k8s.io": "running-upgrade-383497",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "354f39d72eb48ab81bde0deabfb223b790fdf427b60407c68c704731cf38ce92",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32928"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32927"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32926"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/354f39d72eb4",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "0502d034084319335ef7525c0a557cac797e7c7b7ea5f37d9c09ec13a6548031",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.3",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:03",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "38601029ee50800ee6d367c50cc35e5b5b8a9c69b006e7a19da80e0a0e29b84f",
	                    "EndpointID": "0502d034084319335ef7525c0a557cac797e7c7b7ea5f37d9c09ec13a6548031",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.3",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:03",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-383497 -n running-upgrade-383497
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-383497 -n running-upgrade-383497: exit status 4 (340.078939ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 19:15:55.559021  302054 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-383497" does not appear in /home/jenkins/minikube-integration/16890-138069/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-383497" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-383497" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-383497
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-383497: (2.37840989s)
--- FAIL: TestRunningBinaryUpgrade (83.34s)
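Likely root cause, read from the stderr above: the upgraded minikube rewrites pause_image in /etc/crio/crio.conf.d/02-crio.conf, but the machine provisioned by the old v1.9.0 binary (Ubuntu 19.10 kicbase image) does not ship that drop-in, so the sed exits with status 2 and start aborts with RUNTIME_ENABLE (exit status 90). Below is a minimal shell sketch of the failing step plus one defensive variant; the drop-in layout in the second half is an assumption for illustration only, not the minikube implementation.

	# Failing step as logged above: sed cannot open the drop-in on the old image
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf

	# Defensive variant (assumed drop-in layout, illustrative only):
	# create the CRI-O drop-in with a [crio.image] table before editing it,
	# then restart CRI-O as the log does elsewhere.
	sudo mkdir -p /etc/crio/crio.conf.d
	printf '[crio.image]\npause_image = "registry.k8s.io/pause:3.2"\n' \
	  | sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null
	sudo systemctl restart crio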

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (106.7s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /tmp/minikube-v1.9.0.1017498436.exe start -p stopped-upgrade-435958 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:195: (dbg) Done: /tmp/minikube-v1.9.0.1017498436.exe start -p stopped-upgrade-435958 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m27.159688261s)
version_upgrade_test.go:204: (dbg) Run:  /tmp/minikube-v1.9.0.1017498436.exe -p stopped-upgrade-435958 stop
version_upgrade_test.go:204: (dbg) Done: /tmp/minikube-v1.9.0.1017498436.exe -p stopped-upgrade-435958 stop: (10.928350307s)
version_upgrade_test.go:210: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-435958 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:210: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-435958 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (8.610253207s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-435958] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16890
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16890-138069/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-138069/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-435958 in cluster stopped-upgrade-435958
	* Pulling base image ...
	* Restarting existing docker container for "stopped-upgrade-435958" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 19:15:56.638521  302756 out.go:296] Setting OutFile to fd 1 ...
	I0717 19:15:56.638789  302756 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 19:15:56.638823  302756 out.go:309] Setting ErrFile to fd 2...
	I0717 19:15:56.638840  302756 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 19:15:56.639097  302756 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-138069/.minikube/bin
	I0717 19:15:56.639719  302756 out.go:303] Setting JSON to false
	I0717 19:15:56.641757  302756 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":14308,"bootTime":1689607049,"procs":925,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 19:15:56.641876  302756 start.go:138] virtualization: kvm guest
	I0717 19:15:56.645832  302756 out.go:177] * [stopped-upgrade-435958] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 19:15:56.647763  302756 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 19:15:56.647817  302756 notify.go:220] Checking for updates...
	I0717 19:15:56.651455  302756 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 19:15:56.653082  302756 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16890-138069/kubeconfig
	I0717 19:15:56.654667  302756 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-138069/.minikube
	I0717 19:15:56.656320  302756 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 19:15:56.657871  302756 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 19:15:56.659943  302756 config.go:182] Loaded profile config "stopped-upgrade-435958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0717 19:15:56.659990  302756 start_flags.go:695] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0717 19:15:56.662228  302756 out.go:177] * Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	I0717 19:15:56.663929  302756 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 19:15:56.689323  302756 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 19:15:56.689416  302756 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 19:15:56.776479  302756 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:true NGoroutines:62 SystemTime:2023-07-17 19:15:56.763458108 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 19:15:56.776637  302756 docker.go:294] overlay module found
	I0717 19:15:56.779517  302756 out.go:177] * Using the docker driver based on existing profile
	I0717 19:15:56.781213  302756 start.go:298] selected driver: docker
	I0717 19:15:56.781232  302756 start.go:880] validating driver "docker" against &{Name:stopped-upgrade-435958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-435958 Namespace: APIServerName:minikubeCA APIServerNames:[] API
ServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: So
cketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:15:56.781370  302756 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 19:15:56.782337  302756 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 19:15:56.871176  302756 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:true NGoroutines:70 SystemTime:2023-07-17 19:15:56.860739491 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 19:15:56.871542  302756 cni.go:84] Creating CNI manager for ""
	I0717 19:15:56.871558  302756 cni.go:130] EnableDefaultCNI is true, recommending bridge
	I0717 19:15:56.871568  302756 start_flags.go:319] config:
	{Name:stopped-upgrade-435958 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-435958 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlu
gin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:15:56.875849  302756 out.go:177] * Starting control plane node stopped-upgrade-435958 in cluster stopped-upgrade-435958
	I0717 19:15:56.877324  302756 cache.go:122] Beginning downloading kic base image for docker with crio
	I0717 19:15:56.878614  302756 out.go:177] * Pulling base image ...
	I0717 19:15:56.879989  302756 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I0717 19:15:56.880016  302756 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	W0717 19:15:56.902688  302756 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0717 19:15:56.902890  302756 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/stopped-upgrade-435958/config.json ...
	I0717 19:15:56.903297  302756 cache.go:107] acquiring lock: {Name:mkf1a1130734b2d756a0657ef9722999f48d6c2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:15:56.903455  302756 cache.go:107] acquiring lock: {Name:mk99778cf263ded15bef16af944ba7e5e1c2f1a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:15:56.903497  302756 cache.go:107] acquiring lock: {Name:mk0626aa4c32952c38431bc57a3be6531c251df4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:15:56.903540  302756 cache.go:107] acquiring lock: {Name:mkba162517b3c0d46459927d0c5ebda7dc236b77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:15:56.903575  302756 cache.go:115] /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I0717 19:15:56.903589  302756 cache.go:115] /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I0717 19:15:56.903589  302756 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 50.985µs
	I0717 19:15:56.903604  302756 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I0717 19:15:56.903600  302756 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 116.902µs
	I0717 19:15:56.903614  302756 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I0717 19:15:56.903526  302756 cache.go:115] /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I0717 19:15:56.903617  302756 cache.go:107] acquiring lock: {Name:mkd212c5db1f99d1e2779ee03e5908ac3123cf12 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:15:56.903634  302756 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 185.851µs
	I0717 19:15:56.903643  302756 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I0717 19:15:56.903649  302756 cache.go:115] /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I0717 19:15:56.903655  302756 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 39.807µs
	I0717 19:15:56.903663  302756 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I0717 19:15:56.903659  302756 cache.go:107] acquiring lock: {Name:mkdd7c36248d43a8ed2da602bcfcaf77d0ba431f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:15:56.903675  302756 cache.go:107] acquiring lock: {Name:mkd892d265197bba9d74c85569bdbefabd7a9143 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:15:56.903695  302756 cache.go:115] /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0717 19:15:56.903701  302756 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 45.581µs
	I0717 19:15:56.903712  302756 cache.go:115] /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I0717 19:15:56.903717  302756 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0717 19:15:56.903720  302756 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 46.172µs
	I0717 19:15:56.903733  302756 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I0717 19:15:56.903731  302756 cache.go:107] acquiring lock: {Name:mkd71aeba8a963da4395dc7d2ffea751af49e924 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:15:56.903756  302756 cache.go:115] /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0717 19:15:56.903763  302756 cache.go:115] /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I0717 19:15:56.903765  302756 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 478.473µs
	I0717 19:15:56.903769  302756 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 40.406µs
	I0717 19:15:56.903781  302756 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0717 19:15:56.903783  302756 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I0717 19:15:56.903790  302756 cache.go:87] Successfully saved all images to host disk.
	I0717 19:15:56.913692  302756 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0717 19:15:56.913721  302756 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0717 19:15:56.913744  302756 cache.go:195] Successfully downloaded all kic artifacts
	I0717 19:15:56.913794  302756 start.go:365] acquiring machines lock for stopped-upgrade-435958: {Name:mk70551d540c5c00b5bbb8e1accdb756036c3f83 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:15:56.913920  302756 start.go:369] acquired machines lock for "stopped-upgrade-435958" in 96.603µs
	I0717 19:15:56.913946  302756 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:15:56.913962  302756 fix.go:54] fixHost starting: m01
	I0717 19:15:56.914254  302756 cli_runner.go:164] Run: docker container inspect stopped-upgrade-435958 --format={{.State.Status}}
	I0717 19:15:56.935829  302756 fix.go:102] recreateIfNeeded on stopped-upgrade-435958: state=Stopped err=<nil>
	W0717 19:15:56.935895  302756 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 19:15:56.938213  302756 out.go:177] * Restarting existing docker container for "stopped-upgrade-435958" ...
	I0717 19:15:56.939915  302756 cli_runner.go:164] Run: docker start stopped-upgrade-435958
	I0717 19:15:57.207908  302756 cli_runner.go:164] Run: docker container inspect stopped-upgrade-435958 --format={{.State.Status}}
	I0717 19:15:57.227902  302756 kic.go:426] container "stopped-upgrade-435958" state is running.
	I0717 19:15:57.228364  302756 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-435958
	I0717 19:15:57.245713  302756 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/stopped-upgrade-435958/config.json ...
	I0717 19:15:57.245962  302756 machine.go:88] provisioning docker machine ...
	I0717 19:15:57.245989  302756 ubuntu.go:169] provisioning hostname "stopped-upgrade-435958"
	I0717 19:15:57.246040  302756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-435958
	I0717 19:15:57.263858  302756 main.go:141] libmachine: Using SSH client type: native
	I0717 19:15:57.264434  302756 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32951 <nil> <nil>}
	I0717 19:15:57.264457  302756 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-435958 && echo "stopped-upgrade-435958" | sudo tee /etc/hostname
	I0717 19:15:57.265077  302756 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33518->127.0.0.1:32951: read: connection reset by peer
	I0717 19:16:00.384887  302756 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-435958
	
	I0717 19:16:00.384977  302756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-435958
	I0717 19:16:00.407525  302756 main.go:141] libmachine: Using SSH client type: native
	I0717 19:16:00.407932  302756 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32951 <nil> <nil>}
	I0717 19:16:00.407951  302756 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-435958' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-435958/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-435958' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:16:00.516171  302756 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:16:00.516222  302756 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16890-138069/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-138069/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-138069/.minikube}
	I0717 19:16:00.516287  302756 ubuntu.go:177] setting up certificates
	I0717 19:16:00.516301  302756 provision.go:83] configureAuth start
	I0717 19:16:00.516376  302756 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-435958
	I0717 19:16:00.535634  302756 provision.go:138] copyHostCerts
	I0717 19:16:00.535712  302756 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-138069/.minikube/ca.pem, removing ...
	I0717 19:16:00.535729  302756 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-138069/.minikube/ca.pem
	I0717 19:16:00.535805  302756 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-138069/.minikube/ca.pem (1078 bytes)
	I0717 19:16:00.535932  302756 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-138069/.minikube/cert.pem, removing ...
	I0717 19:16:00.535945  302756 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-138069/.minikube/cert.pem
	I0717 19:16:00.536011  302756 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-138069/.minikube/cert.pem (1123 bytes)
	I0717 19:16:00.536098  302756 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-138069/.minikube/key.pem, removing ...
	I0717 19:16:00.536110  302756 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-138069/.minikube/key.pem
	I0717 19:16:00.536149  302756 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-138069/.minikube/key.pem (1675 bytes)
	I0717 19:16:00.536214  302756 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-138069/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-435958 san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-435958]
	I0717 19:16:00.663818  302756 provision.go:172] copyRemoteCerts
	I0717 19:16:00.663874  302756 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:16:00.663907  302756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-435958
	I0717 19:16:00.681517  302756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32951 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/stopped-upgrade-435958/id_rsa Username:docker}
	I0717 19:16:00.771965  302756 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 19:16:00.790653  302756 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 19:16:00.807986  302756 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0717 19:16:00.827096  302756 provision.go:86] duration metric: configureAuth took 310.775251ms
	I0717 19:16:00.827129  302756 ubuntu.go:193] setting minikube options for container-runtime
	I0717 19:16:00.827318  302756 config.go:182] Loaded profile config "stopped-upgrade-435958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0717 19:16:00.827443  302756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-435958
	I0717 19:16:00.846456  302756 main.go:141] libmachine: Using SSH client type: native
	I0717 19:16:00.847050  302756 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32951 <nil> <nil>}
	I0717 19:16:00.847078  302756 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:16:04.064104  302756 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:16:04.064133  302756 machine.go:91] provisioned docker machine in 6.81815439s
	I0717 19:16:04.064144  302756 start.go:300] post-start starting for "stopped-upgrade-435958" (driver="docker")
	I0717 19:16:04.064160  302756 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:16:04.064222  302756 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:16:04.064264  302756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-435958
	I0717 19:16:04.094614  302756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32951 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/stopped-upgrade-435958/id_rsa Username:docker}
	I0717 19:16:04.215601  302756 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:16:04.218964  302756 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 19:16:04.218993  302756 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 19:16:04.219005  302756 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 19:16:04.219014  302756 info.go:137] Remote host: Ubuntu 19.10
	I0717 19:16:04.219026  302756 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-138069/.minikube/addons for local assets ...
	I0717 19:16:04.219091  302756 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-138069/.minikube/files for local assets ...
	I0717 19:16:04.219181  302756 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/1448222.pem -> 1448222.pem in /etc/ssl/certs
	I0717 19:16:04.219295  302756 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:16:04.248413  302756 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/1448222.pem --> /etc/ssl/certs/1448222.pem (1708 bytes)
	I0717 19:16:04.278748  302756 start.go:303] post-start completed in 214.586351ms
	I0717 19:16:04.278829  302756 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 19:16:04.278887  302756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-435958
	I0717 19:16:04.329751  302756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32951 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/stopped-upgrade-435958/id_rsa Username:docker}
	I0717 19:16:04.428058  302756 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 19:16:04.437742  302756 fix.go:56] fixHost completed within 7.523774782s
	I0717 19:16:04.437782  302756 start.go:83] releasing machines lock for "stopped-upgrade-435958", held for 7.523842469s
	I0717 19:16:04.437852  302756 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-435958
	I0717 19:16:04.459778  302756 ssh_runner.go:195] Run: cat /version.json
	I0717 19:16:04.459796  302756 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:16:04.459840  302756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-435958
	I0717 19:16:04.459862  302756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-435958
	I0717 19:16:04.485640  302756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32951 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/stopped-upgrade-435958/id_rsa Username:docker}
	I0717 19:16:04.498869  302756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32951 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/stopped-upgrade-435958/id_rsa Username:docker}
	W0717 19:16:04.621780  302756 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0717 19:16:04.621880  302756 ssh_runner.go:195] Run: systemctl --version
	I0717 19:16:04.627905  302756 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:16:04.684305  302756 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 19:16:04.690065  302756 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:16:04.711402  302756 cni.go:227] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0717 19:16:04.711540  302756 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:16:04.768804  302756 cni.go:268] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 19:16:04.768828  302756 start.go:469] detecting cgroup driver to use...
	I0717 19:16:04.768863  302756 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 19:16:04.768915  302756 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:16:04.799455  302756 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:16:04.811906  302756 docker.go:196] disabling cri-docker service (if available) ...
	I0717 19:16:04.811959  302756 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:16:04.823889  302756 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:16:04.838284  302756 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0717 19:16:04.849985  302756 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0717 19:16:04.850041  302756 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:16:04.939379  302756 docker.go:212] disabling docker service ...
	I0717 19:16:04.939446  302756 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:16:04.949985  302756 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:16:04.962140  302756 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:16:05.044793  302756 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:16:05.152033  302756 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:16:05.164499  302756 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:16:05.185120  302756 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0717 19:16:05.185174  302756 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:16:05.196449  302756 out.go:177] 
	W0717 19:16:05.198534  302756 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0717 19:16:05.198555  302756 out.go:239] * 
	* 
	W0717 19:16:05.199537  302756 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 19:16:05.201045  302756 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:212: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-435958 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (106.70s)
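Root cause, as shown in the stderr above: the v1.9.0-era kic base image (Ubuntu 19.10) ships no /etc/crio/crio.conf.d/02-crio.conf drop-in, so the pause_image sed exits with status 2 and start aborts with RUNTIME_ENABLE. A minimal defensive sketch of that step, reusing the paths and pause image tag seen in the log above (illustrative shell only, not the actual minikube implementation), would create the drop-in before editing it:

	# Illustrative only: guard the pause_image update against a missing drop-in file.
	# Paths and image tag are copied from the failing command above; the real fix may differ.
	sudo mkdir -p /etc/crio/crio.conf.d
	sudo touch /etc/crio/crio.conf.d/02-crio.conf
	if sudo grep -q 'pause_image = ' /etc/crio/crio.conf.d/02-crio.conf; then
	  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
	else
	  echo 'pause_image = "registry.k8s.io/pause:3.2"' | sudo tee -a /etc/crio/crio.conf.d/02-crio.conf
	fi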

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (45.59s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-795576 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-795576 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (40.66672177s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-795576] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16890
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16890-138069/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-138069/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node pause-795576 in cluster pause-795576
	* Pulling base image ...
	* Updating the running docker "pause-795576" container ...
	* Preparing Kubernetes v1.27.3 on CRI-O 1.24.6 ...
	* Configuring CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-795576" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 19:16:57.507384  318888 out.go:296] Setting OutFile to fd 1 ...
	I0717 19:16:57.507520  318888 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 19:16:57.507529  318888 out.go:309] Setting ErrFile to fd 2...
	I0717 19:16:57.507533  318888 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 19:16:57.507764  318888 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-138069/.minikube/bin
	I0717 19:16:57.508419  318888 out.go:303] Setting JSON to false
	I0717 19:16:57.510314  318888 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":14369,"bootTime":1689607049,"procs":780,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 19:16:57.510397  318888 start.go:138] virtualization: kvm guest
	I0717 19:16:57.514182  318888 out.go:177] * [pause-795576] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 19:16:57.515618  318888 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 19:16:57.515625  318888 notify.go:220] Checking for updates...
	I0717 19:16:57.517044  318888 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 19:16:57.518344  318888 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16890-138069/kubeconfig
	I0717 19:16:57.519583  318888 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-138069/.minikube
	I0717 19:16:57.520932  318888 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 19:16:57.522245  318888 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 19:16:57.525946  318888 config.go:182] Loaded profile config "pause-795576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:16:57.526692  318888 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 19:16:57.551575  318888 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 19:16:57.551692  318888 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 19:16:57.608984  318888 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:96 OomKillDisable:true NGoroutines:100 SystemTime:2023-07-17 19:16:57.599220802 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Arch
itecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil
> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 19:16:57.609079  318888 docker.go:294] overlay module found
	I0717 19:16:57.611298  318888 out.go:177] * Using the docker driver based on existing profile
	I0717 19:16:57.613135  318888 start.go:298] selected driver: docker
	I0717 19:16:57.613152  318888 start.go:880] validating driver "docker" against &{Name:pause-795576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:pause-795576 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provi
sioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:16:57.613302  318888 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 19:16:57.613380  318888 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 19:16:57.671778  318888 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:96 OomKillDisable:true NGoroutines:100 SystemTime:2023-07-17 19:16:57.662442343 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Arch
itecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil
> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 19:16:57.672435  318888 cni.go:84] Creating CNI manager for ""
	I0717 19:16:57.672462  318888 cni.go:149] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 19:16:57.672476  318888 start_flags.go:319] config:
	{Name:pause-795576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:pause-795576 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni
FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddo
nImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:16:57.675207  318888 out.go:177] * Starting control plane node pause-795576 in cluster pause-795576
	I0717 19:16:57.677058  318888 cache.go:122] Beginning downloading kic base image for docker with crio
	I0717 19:16:57.678782  318888 out.go:177] * Pulling base image ...
	I0717 19:16:57.680323  318888 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 19:16:57.680358  318888 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 19:16:57.680375  318888 preload.go:148] Found local preload: /home/jenkins/minikube-integration/16890-138069/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4
	I0717 19:16:57.680392  318888 cache.go:57] Caching tarball of preloaded images
	I0717 19:16:57.680491  318888 preload.go:174] Found /home/jenkins/minikube-integration/16890-138069/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0717 19:16:57.680504  318888 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on crio
	I0717 19:16:57.680707  318888 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/pause-795576/config.json ...
	I0717 19:16:57.705087  318888 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0717 19:16:57.705114  318888 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0717 19:16:57.705138  318888 cache.go:195] Successfully downloaded all kic artifacts
	I0717 19:16:57.705175  318888 start.go:365] acquiring machines lock for pause-795576: {Name:mke78b20301a06994e69e7d055bdacc77875fe1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:16:57.705262  318888 start.go:369] acquired machines lock for "pause-795576" in 61.955µs
	I0717 19:16:57.705287  318888 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:16:57.705306  318888 fix.go:54] fixHost starting: 
	I0717 19:16:57.705542  318888 cli_runner.go:164] Run: docker container inspect pause-795576 --format={{.State.Status}}
	I0717 19:16:57.722516  318888 fix.go:102] recreateIfNeeded on pause-795576: state=Running err=<nil>
	W0717 19:16:57.722566  318888 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 19:16:57.724991  318888 out.go:177] * Updating the running docker "pause-795576" container ...
	I0717 19:16:57.726647  318888 machine.go:88] provisioning docker machine ...
	I0717 19:16:57.726686  318888 ubuntu.go:169] provisioning hostname "pause-795576"
	I0717 19:16:57.726803  318888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-795576
	I0717 19:16:57.745514  318888 main.go:141] libmachine: Using SSH client type: native
	I0717 19:16:57.746229  318888 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32948 <nil> <nil>}
	I0717 19:16:57.746255  318888 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-795576 && echo "pause-795576" | sudo tee /etc/hostname
	I0717 19:16:57.898721  318888 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-795576
	
	I0717 19:16:57.898804  318888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-795576
	I0717 19:16:57.917622  318888 main.go:141] libmachine: Using SSH client type: native
	I0717 19:16:57.918283  318888 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32948 <nil> <nil>}
	I0717 19:16:57.918316  318888 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-795576' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-795576/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-795576' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:16:58.044380  318888 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:16:58.044429  318888 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16890-138069/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-138069/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-138069/.minikube}
	I0717 19:16:58.044473  318888 ubuntu.go:177] setting up certificates
	I0717 19:16:58.044491  318888 provision.go:83] configureAuth start
	I0717 19:16:58.044560  318888 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-795576
	I0717 19:16:58.061241  318888 provision.go:138] copyHostCerts
	I0717 19:16:58.061297  318888 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-138069/.minikube/ca.pem, removing ...
	I0717 19:16:58.061305  318888 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-138069/.minikube/ca.pem
	I0717 19:16:58.061362  318888 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-138069/.minikube/ca.pem (1078 bytes)
	I0717 19:16:58.061470  318888 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-138069/.minikube/cert.pem, removing ...
	I0717 19:16:58.061481  318888 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-138069/.minikube/cert.pem
	I0717 19:16:58.061506  318888 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-138069/.minikube/cert.pem (1123 bytes)
	I0717 19:16:58.061610  318888 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-138069/.minikube/key.pem, removing ...
	I0717 19:16:58.061619  318888 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-138069/.minikube/key.pem
	I0717 19:16:58.061640  318888 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-138069/.minikube/key.pem (1675 bytes)
	I0717 19:16:58.061695  318888 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-138069/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca-key.pem org=jenkins.pause-795576 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube pause-795576]
	I0717 19:16:58.148592  318888 provision.go:172] copyRemoteCerts
	I0717 19:16:58.148652  318888 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:16:58.148691  318888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-795576
	I0717 19:16:58.165644  318888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32948 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/pause-795576/id_rsa Username:docker}
	I0717 19:16:58.263926  318888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 19:16:58.298985  318888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0717 19:16:58.325511  318888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 19:16:58.347771  318888 provision.go:86] duration metric: configureAuth took 303.26298ms
	I0717 19:16:58.347800  318888 ubuntu.go:193] setting minikube options for container-runtime
	I0717 19:16:58.348092  318888 config.go:182] Loaded profile config "pause-795576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:16:58.348218  318888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-795576
	I0717 19:16:58.366484  318888 main.go:141] libmachine: Using SSH client type: native
	I0717 19:16:58.366913  318888 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32948 <nil> <nil>}
	I0717 19:16:58.366934  318888 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:17:03.751876  318888 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:17:03.751901  318888 machine.go:91] provisioned docker machine in 6.025236942s
	I0717 19:17:03.751912  318888 start.go:300] post-start starting for "pause-795576" (driver="docker")
	I0717 19:17:03.751926  318888 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:17:03.752022  318888 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:17:03.752071  318888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-795576
	I0717 19:17:03.768715  318888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32948 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/pause-795576/id_rsa Username:docker}
	I0717 19:17:03.861844  318888 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:17:03.865227  318888 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 19:17:03.865254  318888 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 19:17:03.865262  318888 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 19:17:03.865268  318888 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0717 19:17:03.865279  318888 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-138069/.minikube/addons for local assets ...
	I0717 19:17:03.865329  318888 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-138069/.minikube/files for local assets ...
	I0717 19:17:03.865393  318888 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/1448222.pem -> 1448222.pem in /etc/ssl/certs
	I0717 19:17:03.865471  318888 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:17:03.873647  318888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/1448222.pem --> /etc/ssl/certs/1448222.pem (1708 bytes)
	I0717 19:17:03.899224  318888 start.go:303] post-start completed in 147.294688ms
	I0717 19:17:03.899306  318888 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 19:17:03.899354  318888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-795576
	I0717 19:17:03.918665  318888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32948 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/pause-795576/id_rsa Username:docker}
	I0717 19:17:04.008910  318888 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 19:17:04.013412  318888 fix.go:56] fixHost completed within 6.308108921s
	I0717 19:17:04.013439  318888 start.go:83] releasing machines lock for "pause-795576", held for 6.308164281s
	I0717 19:17:04.013588  318888 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-795576
	I0717 19:17:04.031642  318888 ssh_runner.go:195] Run: cat /version.json
	I0717 19:17:04.031705  318888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-795576
	I0717 19:17:04.031650  318888 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:17:04.031842  318888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-795576
	I0717 19:17:04.051022  318888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32948 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/pause-795576/id_rsa Username:docker}
	I0717 19:17:04.051420  318888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32948 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/pause-795576/id_rsa Username:docker}
	W0717 19:17:04.265682  318888 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
	! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
	I0717 19:17:04.265776  318888 ssh_runner.go:195] Run: systemctl --version
	I0717 19:17:04.270368  318888 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:17:04.420251  318888 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 19:17:04.425006  318888 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:17:04.433398  318888 cni.go:227] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0717 19:17:04.433471  318888 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:17:04.442675  318888 cni.go:265] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0717 19:17:04.442700  318888 start.go:469] detecting cgroup driver to use...
	I0717 19:17:04.442745  318888 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 19:17:04.442790  318888 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:17:04.456211  318888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:17:04.466899  318888 docker.go:196] disabling cri-docker service (if available) ...
	I0717 19:17:04.466954  318888 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:17:04.480212  318888 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:17:04.491452  318888 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:17:04.595500  318888 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:17:04.708034  318888 docker.go:212] disabling docker service ...
	I0717 19:17:04.708096  318888 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:17:04.719635  318888 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:17:04.729876  318888 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:17:04.906491  318888 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:17:05.364372  318888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:17:05.379441  318888 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:17:05.399739  318888 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 19:17:05.399818  318888 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:17:05.473884  318888 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:17:05.473956  318888 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:17:05.485450  318888 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:17:05.496941  318888 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:17:05.561686  318888 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:17:05.571827  318888 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:17:05.581021  318888 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:17:05.589445  318888 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:17:05.891923  318888 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:17:06.202089  318888 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:17:06.202151  318888 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:17:06.205824  318888 start.go:537] Will wait 60s for crictl version
	I0717 19:17:06.205882  318888 ssh_runner.go:195] Run: which crictl
	I0717 19:17:06.209229  318888 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:17:06.244306  318888 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0717 19:17:06.244386  318888 ssh_runner.go:195] Run: crio --version
	I0717 19:17:06.281634  318888 ssh_runner.go:195] Run: crio --version
	I0717 19:17:06.319722  318888 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.6 ...
	I0717 19:17:06.321748  318888 cli_runner.go:164] Run: docker network inspect pause-795576 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 19:17:06.340215  318888 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0717 19:17:06.344375  318888 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 19:17:06.344426  318888 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:17:06.385379  318888 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 19:17:06.385403  318888 crio.go:415] Images already preloaded, skipping extraction
	I0717 19:17:06.385455  318888 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:17:06.421516  318888 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 19:17:06.421539  318888 cache_images.go:84] Images are preloaded, skipping loading
	I0717 19:17:06.421596  318888 ssh_runner.go:195] Run: crio config
	I0717 19:17:06.465827  318888 cni.go:84] Creating CNI manager for ""
	I0717 19:17:06.465852  318888 cni.go:149] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 19:17:06.465871  318888 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 19:17:06.465889  318888 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-795576 NodeName:pause-795576 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 19:17:06.466031  318888 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-795576"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 19:17:06.466104  318888 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=pause-795576 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:pause-795576 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 19:17:06.466152  318888 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 19:17:06.476366  318888 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:17:06.476449  318888 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:17:06.488299  318888 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (422 bytes)
	I0717 19:17:06.508305  318888 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 19:17:06.527773  318888 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2093 bytes)
	I0717 19:17:06.548000  318888 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0717 19:17:06.551677  318888 certs.go:56] Setting up /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/pause-795576 for IP: 192.168.67.2
	I0717 19:17:06.551715  318888 certs.go:190] acquiring lock for shared ca certs: {Name:mk42196ce59710ebf500640671660e2f4656c84e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:17:06.551876  318888 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16890-138069/.minikube/ca.key
	I0717 19:17:06.551932  318888 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16890-138069/.minikube/proxy-client-ca.key
	I0717 19:17:06.552042  318888 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/pause-795576/client.key
	I0717 19:17:06.552136  318888 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/pause-795576/apiserver.key.c7fa3a9e
	I0717 19:17:06.552197  318888 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/pause-795576/proxy-client.key
	I0717 19:17:06.552352  318888 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/home/jenkins/minikube-integration/16890-138069/.minikube/certs/144822.pem (1338 bytes)
	W0717 19:17:06.552396  318888 certs.go:433] ignoring /home/jenkins/minikube-integration/16890-138069/.minikube/certs/home/jenkins/minikube-integration/16890-138069/.minikube/certs/144822_empty.pem, impossibly tiny 0 bytes
	I0717 19:17:06.552412  318888 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:17:06.552450  318888 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem (1078 bytes)
	I0717 19:17:06.552495  318888 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/home/jenkins/minikube-integration/16890-138069/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:17:06.552528  318888 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/home/jenkins/minikube-integration/16890-138069/.minikube/certs/key.pem (1675 bytes)
	I0717 19:17:06.552574  318888 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/1448222.pem (1708 bytes)
	I0717 19:17:06.553429  318888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/pause-795576/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 19:17:06.579049  318888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/pause-795576/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 19:17:06.602577  318888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/pause-795576/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:17:06.626285  318888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/pause-795576/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 19:17:06.650369  318888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:17:06.673739  318888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 19:17:06.698501  318888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:17:06.721122  318888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 19:17:06.744207  318888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:17:06.770536  318888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/certs/144822.pem --> /usr/share/ca-certificates/144822.pem (1338 bytes)
	I0717 19:17:06.795889  318888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/1448222.pem --> /usr/share/ca-certificates/1448222.pem (1708 bytes)
	I0717 19:17:06.818698  318888 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:17:06.836903  318888 ssh_runner.go:195] Run: openssl version
	I0717 19:17:06.842176  318888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:17:06.850633  318888 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:17:06.853922  318888 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:46 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:17:06.853979  318888 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:17:06.860335  318888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:17:06.869169  318888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144822.pem && ln -fs /usr/share/ca-certificates/144822.pem /etc/ssl/certs/144822.pem"
	I0717 19:17:06.880988  318888 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144822.pem
	I0717 19:17:06.885329  318888 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:51 /usr/share/ca-certificates/144822.pem
	I0717 19:17:06.885399  318888 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144822.pem
	I0717 19:17:06.892308  318888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/144822.pem /etc/ssl/certs/51391683.0"
	I0717 19:17:06.901646  318888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1448222.pem && ln -fs /usr/share/ca-certificates/1448222.pem /etc/ssl/certs/1448222.pem"
	I0717 19:17:06.912255  318888 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1448222.pem
	I0717 19:17:06.915775  318888 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:51 /usr/share/ca-certificates/1448222.pem
	I0717 19:17:06.915830  318888 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1448222.pem
	I0717 19:17:06.922733  318888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1448222.pem /etc/ssl/certs/3ec20f2e.0"
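	The loop above copies each CA certificate into /usr/share/ca-certificates and then links it into /etc/ssl/certs under its OpenSSL subject-hash name (for example b5213941.0), which is how the system trust store finds it. Below is a small, hypothetical sketch of the same flow; it shells out to the same openssl invocation shown in the log, the helper name linkCACert is made up, and it assumes openssl on PATH plus write access to /etc/ssl/certs (i.e. root).

	// link_ca_cert.go - hypothetical sketch mirroring the commands in the log:
	// compute the OpenSSL subject hash of a PEM certificate and symlink it as
	// /etc/ssl/certs/<hash>.0.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func linkCACert(pemPath string) error {
		// Same invocation as in the log: openssl x509 -hash -noout -in <cert>
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // ln -fs equivalent: drop any existing link first
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}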
	I0717 19:17:06.931816  318888 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 19:17:06.935362  318888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 19:17:06.941436  318888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 19:17:06.947993  318888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 19:17:06.954695  318888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 19:17:06.962346  318888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 19:17:06.969081  318888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
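	Each "-checkend 86400" call above asks openssl whether the certificate expires within the next 24 hours; a non-zero exit would trigger regeneration. The following stdlib-only sketch performs the equivalent check in Go; the certificate path is one of those in the log and is only illustrative.

	// expires_soon.go - hypothetical sketch of the `openssl x509 -checkend 86400`
	// test: exit non-zero if the certificate expires within 24 hours.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-etcd-client.crt") // path from the log
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if time.Until(cert.NotAfter) < 24*time.Hour {
			fmt.Println("certificate expires within 24h, would need regeneration")
			os.Exit(1)
		}
		fmt.Println("certificate valid for more than 24h")
	}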
	I0717 19:17:06.977340  318888 kubeadm.go:404] StartCluster: {Name:pause-795576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:pause-795576 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clust
er.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage
-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:17:06.977502  318888 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:17:06.977552  318888 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:17:07.020731  318888 cri.go:89] found id: "3c4ecc96bf0b93221b1cd41bd5f1526256df3f33b13568ef0d2794d1eb83eca3"
	I0717 19:17:07.020755  318888 cri.go:89] found id: "d061dfba20f3f98ca0245fe20749ce94924338455dffdddfee06f3279e0c6ff8"
	I0717 19:17:07.020762  318888 cri.go:89] found id: "f6dec920e96cf327838fec0d74079f5434ac14535a8435b632e013e75fdb381f"
	I0717 19:17:07.020768  318888 cri.go:89] found id: "883de01fb9bb917f2b2c7d622bacd82c691a1416de9b63dd431f017be84c6eee"
	I0717 19:17:07.020773  318888 cri.go:89] found id: "e10be0e9af17f8987e24e478a481f2c0799874fb1457f4c96c792fa327c1e1fe"
	I0717 19:17:07.020778  318888 cri.go:89] found id: "6c2d784dbdd1891bc8ee8488658197313688eb7036abdb4530230dcf2eccb3fa"
	I0717 19:17:07.020784  318888 cri.go:89] found id: "249f2d6748858a003aca4fb5b25038d813f220e6af0a35223e06266835417555"
	I0717 19:17:07.020789  318888 cri.go:89] found id: ""
	I0717 19:17:07.020836  318888 ssh_runner.go:195] Run: sudo runc list -f json
	I0717 19:17:07.047824  318888 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"249f2d6748858a003aca4fb5b25038d813f220e6af0a35223e06266835417555","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/249f2d6748858a003aca4fb5b25038d813f220e6af0a35223e06266835417555/userdata","rootfs":"/var/lib/containers/storage/overlay/fd2aa9b207f49e48d0ff362c959bf5688104dfd9f16135423c4718b9aeebc107/merged","created":"2023-07-17T19:16:23.821869064Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"ef1f98f0","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"ef1f98f0\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMe
ssagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"249f2d6748858a003aca4fb5b25038d813f220e6af0a35223e06266835417555","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-07-17T19:16:23.737826652Z","io.kubernetes.cri-o.Image":"b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20230511-dc714da8","io.kubernetes.cri-o.ImageRef":"b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-blwth\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"7367b120-9ad2-48ef-a098-f9427cd70ce7\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-blwth_7367b120-9ad2-48ef-a098-f9427cd70ce7/kindnet-cni/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\"}","io.kubernetes.cri-o.MountPoint":
"/var/lib/containers/storage/overlay/fd2aa9b207f49e48d0ff362c959bf5688104dfd9f16135423c4718b9aeebc107/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-blwth_kube-system_7367b120-9ad2-48ef-a098-f9427cd70ce7_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/52b5cc4aad2ad9be691effa49714cc8f6b39045961a40662dd74c5acc9780241/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"52b5cc4aad2ad9be691effa49714cc8f6b39045961a40662dd74c5acc9780241","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-blwth_kube-system_7367b120-9ad2-48ef-a098-f9427cd70ce7_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"se
linux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/7367b120-9ad2-48ef-a098-f9427cd70ce7/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/7367b120-9ad2-48ef-a098-f9427cd70ce7/containers/kindnet-cni/0dcfda50\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/7367b120-9ad2-48ef-a098-f9427cd70ce7/volumes/kubernetes.io~projected/kube-api-access-cl564\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kindnet-blwth","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"7367b120-9ad2-48ef-a098-f9427cd70ce7"
,"kubernetes.io/config.seen":"2023-07-17T19:16:23.332665951Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3c4ecc96bf0b93221b1cd41bd5f1526256df3f33b13568ef0d2794d1eb83eca3","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/3c4ecc96bf0b93221b1cd41bd5f1526256df3f33b13568ef0d2794d1eb83eca3/userdata","rootfs":"/var/lib/containers/storage/overlay/984e04278704013a855ebd140b487dec437c7f2b88ad66ca0ad0ae3ccf7a5795/merged","created":"2023-07-17T19:17:05.090088778Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"88ae6cec","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"88ae6cec\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/de
v/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"3c4ecc96bf0b93221b1cd41bd5f1526256df3f33b13568ef0d2794d1eb83eca3","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-07-17T19:17:04.97761483Z","io.kubernetes.cri-o.Image":"08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.27.3","io.kubernetes.cri-o.ImageRef":"08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-795576\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"b854cc24c9327d52e830e509c0b45f70\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-795576_b854cc24c9327d52e830e509c0b45f70/kube-apiserver/1.log","io.kubernetes.c
ri-o.Metadata":"{\"name\":\"kube-apiserver\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/984e04278704013a855ebd140b487dec437c7f2b88ad66ca0ad0ae3ccf7a5795/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-795576_kube-system_b854cc24c9327d52e830e509c0b45f70_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/a59836c7236b4631707596f0175cb8e9117fee3121c48eec6988cf1f1d7d14d4/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"a59836c7236b4631707596f0175cb8e9117fee3121c48eec6988cf1f1d7d14d4","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-795576_kube-system_b854cc24c9327d52e830e509c0b45f70_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/b854cc24c9327d52e830e509c0b45f70/co
ntainers/kube-apiserver/e2c9bf0d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/b854cc24c9327d52e830e509c0b45f70/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"sel
inux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-pause-795576","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"b854cc24c9327d52e830e509c0b45f70","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"b854cc24c9327d52e830e509c0b45f70","kubernetes.io/config.seen":"2023-07-17T19:16:00.717752512Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6c2d784dbdd1891bc8ee8488658197313688eb7036abdb4530230dcf2eccb3fa","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/6c2d784dbdd1891bc8ee8488658197313688eb7036abdb4530230dcf2eccb3fa/userdata","rootfs":"/var/lib/containers/storage/overlay/de01942e3c1323fdb872b4cd4d75c3b8f377b3b580bae9af93589ce307c636f7/merged","created":"2023-07-17T19:16:23.870191032Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"47638398","io.kubernetes.container.na
me":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"47638398\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"6c2d784dbdd1891bc8ee8488658197313688eb7036abdb4530230dcf2eccb3fa","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-07-17T19:16:23.762428287Z","io.kubernetes.cri-o.Image":"5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-proxy:v1.27.3","io.kubernetes.cri-o.ImageRef":"5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c","io.kubernetes.cri-o.Labels":"{\"io.kub
ernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-vcv28\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"543aec10-6af6-4088-941a-d684da877b3f\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-vcv28_543aec10-6af6-4088-941a-d684da877b3f/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/de01942e3c1323fdb872b4cd4d75c3b8f377b3b580bae9af93589ce307c636f7/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-vcv28_kube-system_543aec10-6af6-4088-941a-d684da877b3f_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/be9d0f26dd7c3ab191a5abf36da714632cbd0f3cda9ce14b052bad43e9c67620/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"be9d0f26dd7c3ab191a5abf36da714632cbd0f3cda9ce14b052bad43e9c67620","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-vcv28_kube-system_543aec10-6af6-4088-941a-d684da877b3f_0","io.k
ubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/543aec10-6af6-4088-941a-d684da877b3f/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/543aec10-6af6-4088-941a-d684da877b3f/containers/kube-proxy/0aa973d9\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/543aec10-6af6-4088-941a-d684da877b3f/volumes/kubernetes.io~configmap/kube-proxy\",\"r
eadonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/543aec10-6af6-4088-941a-d684da877b3f/volumes/kubernetes.io~projected/kube-api-access-hh7kg\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-proxy-vcv28","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"543aec10-6af6-4088-941a-d684da877b3f","kubernetes.io/config.seen":"2023-07-17T19:16:23.331165044Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"883de01fb9bb917f2b2c7d622bacd82c691a1416de9b63dd431f017be84c6eee","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/883de01fb9bb917f2b2c7d622bacd82c691a1416de9b63dd431f017be84c6eee/userdata","rootfs":"/var/lib/containers/storage/overlay/642a656c15db83e8d642e2962a223fbbd43a29afb57a204f39604a8ee358de79/merged","created":"20
23-07-17T19:17:04.995896916Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"159e1046","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"159e1046\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"883de01fb9bb917f2b2c7d622bacd82c691a1416de9b63dd431f017be84c6eee","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-07-17T19:17:04.896943836Z","io.kubernetes.cri-o.Image":"41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-sch
eduler:v1.27.3","io.kubernetes.cri-o.ImageRef":"41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-795576\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"400d9ca1adcedd07ea455c43546148bb\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-795576_400d9ca1adcedd07ea455c43546148bb/kube-scheduler/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/642a656c15db83e8d642e2962a223fbbd43a29afb57a204f39604a8ee358de79/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-795576_kube-system_400d9ca1adcedd07ea455c43546148bb_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/8c0aa2dd28d39ba50cfb2072a76b03120b8e0f39d2e7bd70d851fe70c79305ce/userdata/resolv.conf","io.kube
rnetes.cri-o.SandboxID":"8c0aa2dd28d39ba50cfb2072a76b03120b8e0f39d2e7bd70d851fe70c79305ce","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-795576_kube-system_400d9ca1adcedd07ea455c43546148bb_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/400d9ca1adcedd07ea455c43546148bb/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/400d9ca1adcedd07ea455c43546148bb/containers/kube-scheduler/2ab0d4ac\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-pause-79
5576","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"400d9ca1adcedd07ea455c43546148bb","kubernetes.io/config.hash":"400d9ca1adcedd07ea455c43546148bb","kubernetes.io/config.seen":"2023-07-17T19:16:00.717755611Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d061dfba20f3f98ca0245fe20749ce94924338455dffdddfee06f3279e0c6ff8","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/d061dfba20f3f98ca0245fe20749ce94924338455dffdddfee06f3279e0c6ff8/userdata","rootfs":"/var/lib/containers/storage/overlay/efdd45245e1d01175642c7e1fe9efdd38e2efc70b57415517262a57f4d2a71a1/merged","created":"2023-07-17T19:17:05.080545496Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"97f28112","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes
.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"97f28112\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"d061dfba20f3f98ca0245fe20749ce94924338455dffdddfee06f3279e0c6ff8","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-07-17T19:17:04.966773768Z","io.kubernetes.cri-o.Image":"7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.27.3","io.kubernetes.cri-o.ImageRef":"7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-795576\",\"io.kuberne
tes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"1694f1546c77512884d0dfe3bf2a4ba0\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-795576_1694f1546c77512884d0dfe3bf2a4ba0/kube-controller-manager/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/efdd45245e1d01175642c7e1fe9efdd38e2efc70b57415517262a57f4d2a71a1/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-795576_kube-system_1694f1546c77512884d0dfe3bf2a4ba0_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/20ae7a52e858945587eb7f163d34f79bc2b9a6ce18aad1af8d65006001a8854c/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"20ae7a52e858945587eb7f163d34f79bc2b9a6ce18aad1af8d65006001a8854c","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-795576_kube-system_1694f1546c77512884d0dfe3bf2a4ba0_0","io.kube
rnetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/1694f1546c77512884d0dfe3bf2a4ba0/containers/kube-controller-manager/8a7a154a\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/1694f1546c77512884d0dfe3bf2a4ba0/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\
"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-pause-795576","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"1694f1546c77512884d0dfe3bf2a4ba0","kubernetes.io/config.hash":"1694f1546c77512884d0dfe3bf2a4ba0","kubern
etes.io/config.seen":"2023-07-17T19:16:00.717754116Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e10be0e9af17f8987e24e478a481f2c0799874fb1457f4c96c792fa327c1e1fe","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/e10be0e9af17f8987e24e478a481f2c0799874fb1457f4c96c792fa327c1e1fe/userdata","rootfs":"/var/lib/containers/storage/overlay/b3bae204884484a6b35550971ac8a6e805769241762f4c3a7c9c308965995a04/merged","created":"2023-07-17T19:16:55.408378152Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"5bffbcbc","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.ter
minationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"5bffbcbc\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"e10be0e9af17f8987e24e478a481f2c0799874fb1457f4c96c792fa327c1e1fe","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-07-17T19:16:55.363851867Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","io.kubernetes.cri-o.ImageName":"
registry.k8s.io/coredns/coredns:v1.10.1","io.kubernetes.cri-o.ImageRef":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-5d78c9869d-7bhk2\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"113dbc11-1279-4188-b57f-ef1a7476354e\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-5d78c9869d-7bhk2_113dbc11-1279-4188-b57f-ef1a7476354e/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/b3bae204884484a6b35550971ac8a6e805769241762f4c3a7c9c308965995a04/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-5d78c9869d-7bhk2_kube-system_113dbc11-1279-4188-b57f-ef1a7476354e_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/d649adf698c9dafde02b8a12fb695beb81795107e7d027d64cadfd235bb2ac80/userdata/resolv.conf","io.kubernetes.cri-o.S
andboxID":"d649adf698c9dafde02b8a12fb695beb81795107e7d027d64cadfd235bb2ac80","io.kubernetes.cri-o.SandboxName":"k8s_coredns-5d78c9869d-7bhk2_kube-system_113dbc11-1279-4188-b57f-ef1a7476354e_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/113dbc11-1279-4188-b57f-ef1a7476354e/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/113dbc11-1279-4188-b57f-ef1a7476354e/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/113dbc11-1279-4188-b57f-ef1a7476354e/containers/coredns/8db12501\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"
/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/113dbc11-1279-4188-b57f-ef1a7476354e/volumes/kubernetes.io~projected/kube-api-access-k8bj6\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-5d78c9869d-7bhk2","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"113dbc11-1279-4188-b57f-ef1a7476354e","kubernetes.io/config.seen":"2023-07-17T19:16:54.973191308Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f6dec920e96cf327838fec0d74079f5434ac14535a8435b632e013e75fdb381f","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/f6dec920e96cf327838fec0d74079f5434ac14535a8435b632e013e75fdb381f/userdata","rootfs":"/var/lib/containers/storage/overlay/346787948b55ebcb618ece9de2dd56e22018cfd47ef405d4603a6f740a88967c/merged","created":"2023-07-17T19:17:05.075290261Z","annotations":{"io.container.manager":"cri-o
","io.kubernetes.container.hash":"95733f07","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"95733f07\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"f6dec920e96cf327838fec0d74079f5434ac14535a8435b632e013e75fdb381f","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-07-17T19:17:04.925145157Z","io.kubernetes.cri-o.Image":"86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.7-0","io.kubernetes.cri-o.ImageRef":"86b6af7dd652c1b38118be1c338e9354b33469e69a218f
7e290a0ca5304ad681","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-pause-795576\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"d64400546f98bb129596be581950ced8\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-795576_d64400546f98bb129596be581950ced8/etcd/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/346787948b55ebcb618ece9de2dd56e22018cfd47ef405d4603a6f740a88967c/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-pause-795576_kube-system_d64400546f98bb129596be581950ced8_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/977426b5ad0d404749f9b90f6b18505fa16b074792252144b0b36642498b9e5c/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"977426b5ad0d404749f9b90f6b18505fa16b074792252144b0b36642498b9e5c","io.kubernetes.cri-o.SandboxName":"k8s_etcd-pause-795576_kube-system_d644
00546f98bb129596be581950ced8_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/d64400546f98bb129596be581950ced8/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/d64400546f98bb129596be581950ced8/containers/etcd/cf0ef901\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-pause-795576","io.kubernetes.pod.namespace":"kube-sys
tem","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"d64400546f98bb129596be581950ced8","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.67.2:2379","kubernetes.io/config.hash":"d64400546f98bb129596be581950ced8","kubernetes.io/config.seen":"2023-07-17T19:16:00.717746912Z","kubernetes.io/config.source":"file"},"owner":"root"}]
	I0717 19:17:07.048354  318888 cri.go:126] list returned 7 containers
	I0717 19:17:07.048372  318888 cri.go:129] container: {ID:249f2d6748858a003aca4fb5b25038d813f220e6af0a35223e06266835417555 Status:stopped}
	I0717 19:17:07.048392  318888 cri.go:135] skipping {249f2d6748858a003aca4fb5b25038d813f220e6af0a35223e06266835417555 stopped}: state = "stopped", want "paused"
	I0717 19:17:07.048406  318888 cri.go:129] container: {ID:3c4ecc96bf0b93221b1cd41bd5f1526256df3f33b13568ef0d2794d1eb83eca3 Status:stopped}
	I0717 19:17:07.048419  318888 cri.go:135] skipping {3c4ecc96bf0b93221b1cd41bd5f1526256df3f33b13568ef0d2794d1eb83eca3 stopped}: state = "stopped", want "paused"
	I0717 19:17:07.048429  318888 cri.go:129] container: {ID:6c2d784dbdd1891bc8ee8488658197313688eb7036abdb4530230dcf2eccb3fa Status:stopped}
	I0717 19:17:07.048437  318888 cri.go:135] skipping {6c2d784dbdd1891bc8ee8488658197313688eb7036abdb4530230dcf2eccb3fa stopped}: state = "stopped", want "paused"
	I0717 19:17:07.048447  318888 cri.go:129] container: {ID:883de01fb9bb917f2b2c7d622bacd82c691a1416de9b63dd431f017be84c6eee Status:stopped}
	I0717 19:17:07.048460  318888 cri.go:135] skipping {883de01fb9bb917f2b2c7d622bacd82c691a1416de9b63dd431f017be84c6eee stopped}: state = "stopped", want "paused"
	I0717 19:17:07.048475  318888 cri.go:129] container: {ID:d061dfba20f3f98ca0245fe20749ce94924338455dffdddfee06f3279e0c6ff8 Status:stopped}
	I0717 19:17:07.048486  318888 cri.go:135] skipping {d061dfba20f3f98ca0245fe20749ce94924338455dffdddfee06f3279e0c6ff8 stopped}: state = "stopped", want "paused"
	I0717 19:17:07.048493  318888 cri.go:129] container: {ID:e10be0e9af17f8987e24e478a481f2c0799874fb1457f4c96c792fa327c1e1fe Status:stopped}
	I0717 19:17:07.048505  318888 cri.go:135] skipping {e10be0e9af17f8987e24e478a481f2c0799874fb1457f4c96c792fa327c1e1fe stopped}: state = "stopped", want "paused"
	I0717 19:17:07.048515  318888 cri.go:129] container: {ID:f6dec920e96cf327838fec0d74079f5434ac14535a8435b632e013e75fdb381f Status:stopped}
	I0717 19:17:07.048523  318888 cri.go:135] skipping {f6dec920e96cf327838fec0d74079f5434ac14535a8435b632e013e75fdb381f stopped}: state = "stopped", want "paused"
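	cri.go above decodes the output of "sudo runc list -f json" and keeps only containers whose state matches the requested one (here "paused", so every "stopped" entry is skipped). A minimal decoding sketch over that same JSON shape follows; only the two fields the filter actually needs are modelled, and it assumes sudo and runc are available as in the log.

	// filter_runc.go - hypothetical sketch of the filtering step shown above:
	// decode `runc list -f json` and keep containers in the wanted state.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
		"os/exec"
	)

	// container models only the fields this filter needs from the runc JSON.
	type container struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	func main() {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		var all []container
		if err := json.Unmarshal(out, &all); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		want := "paused" // the state the log above is filtering for
		for _, c := range all {
			if c.Status != want {
				fmt.Printf("skipping %s: state %q, want %q\n", c.ID, c.Status, want)
				continue
			}
			fmt.Println("matched:", c.ID)
		}
	}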
	I0717 19:17:07.048577  318888 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:17:07.059554  318888 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0717 19:17:07.059576  318888 kubeadm.go:636] restartCluster start
	I0717 19:17:07.059630  318888 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 19:17:07.069388  318888 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:07.070401  318888 kubeconfig.go:92] found "pause-795576" server: "https://192.168.67.2:8443"
	I0717 19:17:07.071928  318888 kapi.go:59] client config for pause-795576: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16890-138069/.minikube/profiles/pause-795576/client.crt", KeyFile:"/home/jenkins/minikube-integration/16890-138069/.minikube/profiles/pause-795576/client.key", CAFile:"/home/jenkins/minikube-integration/16890-138069/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 19:17:07.072899  318888 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 19:17:07.081588  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:07.081647  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:07.091239  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:07.592253  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:07.592317  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:07.602874  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:08.091426  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:08.091520  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:08.103609  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:08.592219  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:08.592301  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:08.606487  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:09.092214  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:09.092291  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:09.102918  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:09.591408  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:09.591498  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:09.602029  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:10.091629  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:10.091723  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:10.102016  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:10.591554  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:10.591677  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:10.601759  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:11.092357  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:11.092435  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:11.101989  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:11.591428  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:11.591509  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:11.601562  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:12.092211  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:12.092316  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:12.102546  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:12.592341  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:12.592414  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:12.602282  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:13.091814  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:13.091898  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:13.102925  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:13.591410  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:13.591497  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:13.601902  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:14.092039  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:14.092143  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:14.102651  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:14.591810  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:14.591897  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:14.601989  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:15.091535  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:15.091626  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:15.101734  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:15.591299  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:15.591383  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:15.601527  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:16.092099  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:16.092203  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:16.102156  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:16.591684  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:16.591790  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:16.601816  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:17.082386  318888 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
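	The block above retries "pgrep -xnf kube-apiserver.*minikube.*" roughly every 500ms until the wait budget is exhausted, then gives up with "context deadline exceeded" and falls back to reconfiguring the cluster. Below is a stand-alone sketch of that wait pattern using a context deadline; the 10-second budget is inferred from the timestamps in the log, not taken from minikube's source, and the pgrep pattern is simplified.

	// wait_apiserver.go - hypothetical sketch of the polling pattern in the log:
	// retry pgrep every 500ms until it succeeds or the context deadline expires.
	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func waitForAPIServer(ctx context.Context) error {
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			// pgrep exits 0 only if a matching process exists.
			if err := exec.CommandContext(ctx, "pgrep", "-x", "kube-apiserver").Run(); err == nil {
				return nil
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("apiserver error: %w", ctx.Err())
			case <-ticker.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()
		if err := waitForAPIServer(ctx); err != nil {
			fmt.Println("needs reconfigure:", err) // same decision the log takes above
			return
		}
		fmt.Println("apiserver process is up")
	}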
	I0717 19:17:17.082436  318888 kubeadm.go:1128] stopping kube-system containers ...
	I0717 19:17:17.082451  318888 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 19:17:17.082517  318888 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:17:17.118915  318888 cri.go:89] found id: "ab7184693b8535872a6449bd84279882db6966e0d108be297584389fcbd446cd"
	I0717 19:17:17.118943  318888 cri.go:89] found id: "3c4ecc96bf0b93221b1cd41bd5f1526256df3f33b13568ef0d2794d1eb83eca3"
	I0717 19:17:17.118951  318888 cri.go:89] found id: "d061dfba20f3f98ca0245fe20749ce94924338455dffdddfee06f3279e0c6ff8"
	I0717 19:17:17.118957  318888 cri.go:89] found id: "f6dec920e96cf327838fec0d74079f5434ac14535a8435b632e013e75fdb381f"
	I0717 19:17:17.118963  318888 cri.go:89] found id: "883de01fb9bb917f2b2c7d622bacd82c691a1416de9b63dd431f017be84c6eee"
	I0717 19:17:17.118969  318888 cri.go:89] found id: "e10be0e9af17f8987e24e478a481f2c0799874fb1457f4c96c792fa327c1e1fe"
	I0717 19:17:17.118976  318888 cri.go:89] found id: "6c2d784dbdd1891bc8ee8488658197313688eb7036abdb4530230dcf2eccb3fa"
	I0717 19:17:17.118981  318888 cri.go:89] found id: "249f2d6748858a003aca4fb5b25038d813f220e6af0a35223e06266835417555"
	I0717 19:17:17.118985  318888 cri.go:89] found id: ""
	I0717 19:17:17.118990  318888 cri.go:234] Stopping containers: [ab7184693b8535872a6449bd84279882db6966e0d108be297584389fcbd446cd 3c4ecc96bf0b93221b1cd41bd5f1526256df3f33b13568ef0d2794d1eb83eca3 d061dfba20f3f98ca0245fe20749ce94924338455dffdddfee06f3279e0c6ff8 f6dec920e96cf327838fec0d74079f5434ac14535a8435b632e013e75fdb381f 883de01fb9bb917f2b2c7d622bacd82c691a1416de9b63dd431f017be84c6eee e10be0e9af17f8987e24e478a481f2c0799874fb1457f4c96c792fa327c1e1fe 6c2d784dbdd1891bc8ee8488658197313688eb7036abdb4530230dcf2eccb3fa 249f2d6748858a003aca4fb5b25038d813f220e6af0a35223e06266835417555]
	I0717 19:17:17.119041  318888 ssh_runner.go:195] Run: which crictl
	I0717 19:17:17.122491  318888 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 ab7184693b8535872a6449bd84279882db6966e0d108be297584389fcbd446cd 3c4ecc96bf0b93221b1cd41bd5f1526256df3f33b13568ef0d2794d1eb83eca3 d061dfba20f3f98ca0245fe20749ce94924338455dffdddfee06f3279e0c6ff8 f6dec920e96cf327838fec0d74079f5434ac14535a8435b632e013e75fdb381f 883de01fb9bb917f2b2c7d622bacd82c691a1416de9b63dd431f017be84c6eee e10be0e9af17f8987e24e478a481f2c0799874fb1457f4c96c792fa327c1e1fe 6c2d784dbdd1891bc8ee8488658197313688eb7036abdb4530230dcf2eccb3fa 249f2d6748858a003aca4fb5b25038d813f220e6af0a35223e06266835417555
	I0717 19:17:17.531585  318888 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 19:17:17.626924  318888 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:17:17.636069  318888 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jul 17 19:15 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jul 17 19:16 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Jul 17 19:16 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jul 17 19:16 /etc/kubernetes/scheduler.conf
	
	I0717 19:17:17.636156  318888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:17:17.644854  318888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:17:17.653576  318888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:17:17.662019  318888 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:17.662095  318888 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:17:17.670391  318888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:17:17.679253  318888 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:17.679334  318888 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:17:17.687631  318888 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:17:17.696369  318888 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 19:17:17.696393  318888 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:17:17.748307  318888 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:17:18.632331  318888 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:17:18.797010  318888 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:17:18.853832  318888 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:17:18.986878  318888 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:17:18.986960  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:17:19.498103  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:17:19.997578  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:17:20.497612  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:17:20.509814  318888 api_server.go:72] duration metric: took 1.522935408s to wait for apiserver process to appear ...
	I0717 19:17:20.509839  318888 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:17:20.509859  318888 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0717 19:17:22.411960  318888 api_server.go:279] https://192.168.67.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:17:22.412022  318888 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:17:22.912688  318888 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0717 19:17:22.918471  318888 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 19:17:22.918506  318888 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 19:17:23.413158  318888 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0717 19:17:23.418644  318888 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 19:17:23.418672  318888 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 19:17:23.912182  318888 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0717 19:17:23.917834  318888 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0717 19:17:23.926485  318888 api_server.go:141] control plane version: v1.27.3
	I0717 19:17:23.926517  318888 api_server.go:131] duration metric: took 3.416671828s to wait for apiserver health ...
	I0717 19:17:23.926528  318888 cni.go:84] Creating CNI manager for ""
	I0717 19:17:23.926537  318888 cni.go:149] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 19:17:23.929204  318888 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0717 19:17:23.930742  318888 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 19:17:23.934475  318888 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0717 19:17:23.934493  318888 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 19:17:23.950602  318888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0717 19:17:24.599328  318888 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:17:24.606223  318888 system_pods.go:59] 7 kube-system pods found
	I0717 19:17:24.606260  318888 system_pods.go:61] "coredns-5d78c9869d-7bhk2" [113dbc11-1279-4188-b57f-ef1a7476354e] Running
	I0717 19:17:24.606270  318888 system_pods.go:61] "etcd-pause-795576" [cb60766e-050b-459f-ab27-b4eb96c1cfb1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 19:17:24.606283  318888 system_pods.go:61] "kindnet-blwth" [7367b120-9ad2-48ef-a098-f9427cd70ce7] Running
	I0717 19:17:24.606295  318888 system_pods.go:61] "kube-apiserver-pause-795576" [deacff2a-f4f5-4573-985b-f50aec648951] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 19:17:24.606305  318888 system_pods.go:61] "kube-controller-manager-pause-795576" [7fe105ea-5ec8-4082-8c94-109c5613c844] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 19:17:24.606312  318888 system_pods.go:61] "kube-proxy-vcv28" [543aec10-6af6-4088-941a-d684da877b3f] Running
	I0717 19:17:24.606330  318888 system_pods.go:61] "kube-scheduler-pause-795576" [282169f5-c63d-4d71-9dd5-180ca707ac61] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 19:17:24.606337  318888 system_pods.go:74] duration metric: took 6.98622ms to wait for pod list to return data ...
	I0717 19:17:24.606346  318888 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:17:24.609591  318888 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0717 19:17:24.609618  318888 node_conditions.go:123] node cpu capacity is 8
	I0717 19:17:24.609627  318888 node_conditions.go:105] duration metric: took 3.276797ms to run NodePressure ...
	I0717 19:17:24.609647  318888 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:17:24.829854  318888 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0717 19:17:24.834970  318888 kubeadm.go:787] kubelet initialised
	I0717 19:17:24.834992  318888 kubeadm.go:788] duration metric: took 5.114607ms waiting for restarted kubelet to initialise ...
	I0717 19:17:24.835001  318888 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:17:24.840370  318888 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-7bhk2" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:24.845574  318888 pod_ready.go:92] pod "coredns-5d78c9869d-7bhk2" in "kube-system" namespace has status "Ready":"True"
	I0717 19:17:24.845597  318888 pod_ready.go:81] duration metric: took 5.201567ms waiting for pod "coredns-5d78c9869d-7bhk2" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:24.845608  318888 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-795576" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:26.856893  318888 pod_ready.go:102] pod "etcd-pause-795576" in "kube-system" namespace has status "Ready":"False"
	I0717 19:17:28.856999  318888 pod_ready.go:102] pod "etcd-pause-795576" in "kube-system" namespace has status "Ready":"False"
	I0717 19:17:31.355930  318888 pod_ready.go:92] pod "etcd-pause-795576" in "kube-system" namespace has status "Ready":"True"
	I0717 19:17:31.355952  318888 pod_ready.go:81] duration metric: took 6.510338235s waiting for pod "etcd-pause-795576" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:31.355965  318888 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-795576" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:33.368050  318888 pod_ready.go:102] pod "kube-apiserver-pause-795576" in "kube-system" namespace has status "Ready":"False"
	I0717 19:17:34.867118  318888 pod_ready.go:92] pod "kube-apiserver-pause-795576" in "kube-system" namespace has status "Ready":"True"
	I0717 19:17:34.867141  318888 pod_ready.go:81] duration metric: took 3.511170042s waiting for pod "kube-apiserver-pause-795576" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:34.867154  318888 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-795576" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:34.872200  318888 pod_ready.go:92] pod "kube-controller-manager-pause-795576" in "kube-system" namespace has status "Ready":"True"
	I0717 19:17:34.872222  318888 pod_ready.go:81] duration metric: took 5.061874ms waiting for pod "kube-controller-manager-pause-795576" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:34.872234  318888 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vcv28" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:34.876974  318888 pod_ready.go:92] pod "kube-proxy-vcv28" in "kube-system" namespace has status "Ready":"True"
	I0717 19:17:34.876994  318888 pod_ready.go:81] duration metric: took 4.75416ms waiting for pod "kube-proxy-vcv28" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:34.877002  318888 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-795576" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:34.882008  318888 pod_ready.go:92] pod "kube-scheduler-pause-795576" in "kube-system" namespace has status "Ready":"True"
	I0717 19:17:34.882025  318888 pod_ready.go:81] duration metric: took 5.017488ms waiting for pod "kube-scheduler-pause-795576" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:34.882031  318888 pod_ready.go:38] duration metric: took 10.047022086s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:17:34.882048  318888 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 19:17:34.889459  318888 ops.go:34] apiserver oom_adj: -16
	I0717 19:17:34.889481  318888 kubeadm.go:640] restartCluster took 27.829897508s
	I0717 19:17:34.889489  318888 kubeadm.go:406] StartCluster complete in 27.912159818s
	I0717 19:17:34.889507  318888 settings.go:142] acquiring lock: {Name:mk9765434b8f4871dd605367f6caa71617d51b6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:17:34.889566  318888 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16890-138069/kubeconfig
	I0717 19:17:34.890985  318888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-138069/kubeconfig: {Name:mkc53c034e0e90a78d013670a58d5882070a3e3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:17:34.891218  318888 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 19:17:34.891367  318888 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0717 19:17:34.893621  318888 out.go:177] * Enabled addons: 
	I0717 19:17:34.891570  318888 config.go:182] Loaded profile config "pause-795576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:17:34.892386  318888 kapi.go:59] client config for pause-795576: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16890-138069/.minikube/profiles/pause-795576/client.crt", KeyFile:"/home/jenkins/minikube-integration/16890-138069/.minikube/profiles/pause-795576/client.key", CAFile:"/home/jenkins/minikube-integration/16890-138069/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 19:17:34.895686  318888 addons.go:502] enable addons completed in 4.319254ms: enabled=[]
	I0717 19:17:34.900065  318888 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-795576" context rescaled to 1 replicas
	I0717 19:17:34.900105  318888 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 19:17:34.901715  318888 out.go:177] * Verifying Kubernetes components...
	I0717 19:17:34.903227  318888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:17:34.975028  318888 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0717 19:17:34.975040  318888 node_ready.go:35] waiting up to 6m0s for node "pause-795576" to be "Ready" ...
	I0717 19:17:34.977527  318888 node_ready.go:49] node "pause-795576" has status "Ready":"True"
	I0717 19:17:34.977547  318888 node_ready.go:38] duration metric: took 2.489317ms waiting for node "pause-795576" to be "Ready" ...
	I0717 19:17:34.977557  318888 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:17:34.982915  318888 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-7bhk2" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:35.264001  318888 pod_ready.go:92] pod "coredns-5d78c9869d-7bhk2" in "kube-system" namespace has status "Ready":"True"
	I0717 19:17:35.264029  318888 pod_ready.go:81] duration metric: took 281.084061ms waiting for pod "coredns-5d78c9869d-7bhk2" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:35.264039  318888 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-795576" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:35.664667  318888 pod_ready.go:92] pod "etcd-pause-795576" in "kube-system" namespace has status "Ready":"True"
	I0717 19:17:35.664695  318888 pod_ready.go:81] duration metric: took 400.647826ms waiting for pod "etcd-pause-795576" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:35.664711  318888 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-795576" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:36.064629  318888 pod_ready.go:92] pod "kube-apiserver-pause-795576" in "kube-system" namespace has status "Ready":"True"
	I0717 19:17:36.064655  318888 pod_ready.go:81] duration metric: took 399.935907ms waiting for pod "kube-apiserver-pause-795576" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:36.064666  318888 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-795576" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:36.464603  318888 pod_ready.go:92] pod "kube-controller-manager-pause-795576" in "kube-system" namespace has status "Ready":"True"
	I0717 19:17:36.464628  318888 pod_ready.go:81] duration metric: took 399.955789ms waiting for pod "kube-controller-manager-pause-795576" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:36.464638  318888 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vcv28" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:36.864714  318888 pod_ready.go:92] pod "kube-proxy-vcv28" in "kube-system" namespace has status "Ready":"True"
	I0717 19:17:36.864736  318888 pod_ready.go:81] duration metric: took 400.092782ms waiting for pod "kube-proxy-vcv28" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:36.864745  318888 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-795576" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:37.264940  318888 pod_ready.go:92] pod "kube-scheduler-pause-795576" in "kube-system" namespace has status "Ready":"True"
	I0717 19:17:37.264967  318888 pod_ready.go:81] duration metric: took 400.214774ms waiting for pod "kube-scheduler-pause-795576" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:37.264981  318888 pod_ready.go:38] duration metric: took 2.287410265s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:17:37.265001  318888 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:17:37.265055  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:17:37.276679  318888 api_server.go:72] duration metric: took 2.376534107s to wait for apiserver process to appear ...
	I0717 19:17:37.276709  318888 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:17:37.276726  318888 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0717 19:17:37.281249  318888 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0717 19:17:37.282295  318888 api_server.go:141] control plane version: v1.27.3
	I0717 19:17:37.282319  318888 api_server.go:131] duration metric: took 5.603456ms to wait for apiserver health ...
	I0717 19:17:37.282329  318888 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:17:37.467541  318888 system_pods.go:59] 7 kube-system pods found
	I0717 19:17:37.467573  318888 system_pods.go:61] "coredns-5d78c9869d-7bhk2" [113dbc11-1279-4188-b57f-ef1a7476354e] Running
	I0717 19:17:37.467581  318888 system_pods.go:61] "etcd-pause-795576" [cb60766e-050b-459f-ab27-b4eb96c1cfb1] Running
	I0717 19:17:37.467586  318888 system_pods.go:61] "kindnet-blwth" [7367b120-9ad2-48ef-a098-f9427cd70ce7] Running
	I0717 19:17:37.467592  318888 system_pods.go:61] "kube-apiserver-pause-795576" [deacff2a-f4f5-4573-985b-f50aec648951] Running
	I0717 19:17:37.467597  318888 system_pods.go:61] "kube-controller-manager-pause-795576" [7fe105ea-5ec8-4082-8c94-109c5613c844] Running
	I0717 19:17:37.467603  318888 system_pods.go:61] "kube-proxy-vcv28" [543aec10-6af6-4088-941a-d684da877b3f] Running
	I0717 19:17:37.467608  318888 system_pods.go:61] "kube-scheduler-pause-795576" [282169f5-c63d-4d71-9dd5-180ca707ac61] Running
	I0717 19:17:37.467618  318888 system_pods.go:74] duration metric: took 185.280635ms to wait for pod list to return data ...
	I0717 19:17:37.467628  318888 default_sa.go:34] waiting for default service account to be created ...
	I0717 19:17:37.664184  318888 default_sa.go:45] found service account: "default"
	I0717 19:17:37.664211  318888 default_sa.go:55] duration metric: took 196.57685ms for default service account to be created ...
	I0717 19:17:37.664219  318888 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 19:17:37.867944  318888 system_pods.go:86] 7 kube-system pods found
	I0717 19:17:37.868007  318888 system_pods.go:89] "coredns-5d78c9869d-7bhk2" [113dbc11-1279-4188-b57f-ef1a7476354e] Running
	I0717 19:17:37.868020  318888 system_pods.go:89] "etcd-pause-795576" [cb60766e-050b-459f-ab27-b4eb96c1cfb1] Running
	I0717 19:17:37.868025  318888 system_pods.go:89] "kindnet-blwth" [7367b120-9ad2-48ef-a098-f9427cd70ce7] Running
	I0717 19:17:37.868032  318888 system_pods.go:89] "kube-apiserver-pause-795576" [deacff2a-f4f5-4573-985b-f50aec648951] Running
	I0717 19:17:37.868036  318888 system_pods.go:89] "kube-controller-manager-pause-795576" [7fe105ea-5ec8-4082-8c94-109c5613c844] Running
	I0717 19:17:37.868041  318888 system_pods.go:89] "kube-proxy-vcv28" [543aec10-6af6-4088-941a-d684da877b3f] Running
	I0717 19:17:37.868045  318888 system_pods.go:89] "kube-scheduler-pause-795576" [282169f5-c63d-4d71-9dd5-180ca707ac61] Running
	I0717 19:17:37.868051  318888 system_pods.go:126] duration metric: took 203.827832ms to wait for k8s-apps to be running ...
	I0717 19:17:37.868058  318888 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 19:17:37.868104  318888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:17:37.882536  318888 system_svc.go:56] duration metric: took 14.46342ms WaitForService to wait for kubelet.
	I0717 19:17:37.882566  318888 kubeadm.go:581] duration metric: took 2.982428447s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 19:17:37.882591  318888 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:17:38.064900  318888 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0717 19:17:38.064924  318888 node_conditions.go:123] node cpu capacity is 8
	I0717 19:17:38.064935  318888 node_conditions.go:105] duration metric: took 182.337085ms to run NodePressure ...
	I0717 19:17:38.064945  318888 start.go:228] waiting for startup goroutines ...
	I0717 19:17:38.064951  318888 start.go:233] waiting for cluster config update ...
	I0717 19:17:38.064958  318888 start.go:242] writing updated cluster config ...
	I0717 19:17:38.065224  318888 ssh_runner.go:195] Run: rm -f paused
	I0717 19:17:38.120289  318888 start.go:578] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0717 19:17:38.122981  318888 out.go:177] * Done! kubectl is now configured to use "pause-795576" cluster and "default" namespace by default

                                                
                                                
** /stderr **
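For reference, the healthz polling visible in the stderr log above can be reproduced by hand against the same endpoint. This is a sketch only, assuming the control-plane address 192.168.67.2:8443 reported in the log and skipping TLS verification for illustration; during a restart the responses progress exactly as logged: 403 for the anonymous user, 500 while the rbac/bootstrap-roles and scheduling poststart hooks settle, then a bare "ok".

	# Manually probe the same endpoint the restart loop polls (insecure TLS, illustration only)
	curl -sk https://192.168.67.2:8443/healthz
	# Ask for the per-check breakdown shown in the 500 responses above
	curl -sk "https://192.168.67.2:8443/healthz?verbose"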
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-795576
helpers_test.go:235: (dbg) docker inspect pause-795576:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b41214069f99263d678e55389354861a53d0485040b7e5f3a65f045ba3bed2d3",
	        "Created": "2023-07-17T19:15:49.103413251Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 299832,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-17T19:15:49.436435024Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6cc01e6091959400f260dc442708e7c71630b58dab1f7c344cb00926bd84950",
	        "ResolvConfPath": "/var/lib/docker/containers/b41214069f99263d678e55389354861a53d0485040b7e5f3a65f045ba3bed2d3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b41214069f99263d678e55389354861a53d0485040b7e5f3a65f045ba3bed2d3/hostname",
	        "HostsPath": "/var/lib/docker/containers/b41214069f99263d678e55389354861a53d0485040b7e5f3a65f045ba3bed2d3/hosts",
	        "LogPath": "/var/lib/docker/containers/b41214069f99263d678e55389354861a53d0485040b7e5f3a65f045ba3bed2d3/b41214069f99263d678e55389354861a53d0485040b7e5f3a65f045ba3bed2d3-json.log",
	        "Name": "/pause-795576",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-795576:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-795576",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/107bf35e8ddc6283bec58cb350be16e6cc2c143f61e57c6923c1a6d71f2cc2cd-init/diff:/var/lib/docker/overlay2/d8b40fcaabfbbb6eb20cfe7c35f752b4babaa96b29803507d5f63d9939e9e0f0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/107bf35e8ddc6283bec58cb350be16e6cc2c143f61e57c6923c1a6d71f2cc2cd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/107bf35e8ddc6283bec58cb350be16e6cc2c143f61e57c6923c1a6d71f2cc2cd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/107bf35e8ddc6283bec58cb350be16e6cc2c143f61e57c6923c1a6d71f2cc2cd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-795576",
	                "Source": "/var/lib/docker/volumes/pause-795576/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-795576",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-795576",
	                "name.minikube.sigs.k8s.io": "pause-795576",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "821d2b3df00169f19368e5d62df45fb50dabe902374bbdce37f18f99cfa644c3",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32948"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32947"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32944"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32946"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32945"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/821d2b3df001",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-795576": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b41214069f99",
	                        "pause-795576"
	                    ],
	                    "NetworkID": "61bb7c620e400c28091a747ba0fe9ed8a58ea2f099bdaf767519ccfad62d2f34",
	                    "EndpointID": "3ff405adca2912a7b828b70034ffd15b4dd2933d37631540b521d0907702e55a",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
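The inspect dump above can be narrowed to the fields the post-mortem actually relies on, the container state and the host port mapped to 8443, with a Go-template format string; a minimal sketch, with field paths taken from the JSON printed above:

	# Print only the run state and the forwarded apiserver port for the pause-795576 container
	docker inspect pause-795576 --format '{{.State.Status}} {{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'
	# expected output, per the dump above: running 32945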
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-795576 -n pause-795576
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-795576 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-795576 logs -n 25: (1.470833359s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p NoKubernetes-404036             | NoKubernetes-404036       | jenkins | v1.30.1 | 17 Jul 23 19:14 UTC | 17 Jul 23 19:15 UTC |
	|         | --no-kubernetes                    |                           |         |         |                     |                     |
	|         | --driver=docker                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-404036             | NoKubernetes-404036       | jenkins | v1.30.1 | 17 Jul 23 19:15 UTC | 17 Jul 23 19:15 UTC |
	| start   | -p NoKubernetes-404036             | NoKubernetes-404036       | jenkins | v1.30.1 | 17 Jul 23 19:15 UTC | 17 Jul 23 19:15 UTC |
	|         | --no-kubernetes                    |                           |         |         |                     |                     |
	|         | --driver=docker                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-404036 sudo        | NoKubernetes-404036       | jenkins | v1.30.1 | 17 Jul 23 19:15 UTC |                     |
	|         | systemctl is-active --quiet        |                           |         |         |                     |                     |
	|         | service kubelet                    |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-404036             | NoKubernetes-404036       | jenkins | v1.30.1 | 17 Jul 23 19:15 UTC | 17 Jul 23 19:15 UTC |
	| start   | -p NoKubernetes-404036             | NoKubernetes-404036       | jenkins | v1.30.1 | 17 Jul 23 19:15 UTC | 17 Jul 23 19:15 UTC |
	|         | --driver=docker                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-404036 sudo        | NoKubernetes-404036       | jenkins | v1.30.1 | 17 Jul 23 19:15 UTC |                     |
	|         | systemctl is-active --quiet        |                           |         |         |                     |                     |
	|         | service kubelet                    |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-404036             | NoKubernetes-404036       | jenkins | v1.30.1 | 17 Jul 23 19:15 UTC | 17 Jul 23 19:15 UTC |
	| start   | -p force-systemd-env-020920        | force-systemd-env-020920  | jenkins | v1.30.1 | 17 Jul 23 19:15 UTC | 17 Jul 23 19:15 UTC |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker               |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p offline-crio-369384             | offline-crio-369384       | jenkins | v1.30.1 | 17 Jul 23 19:15 UTC | 17 Jul 23 19:15 UTC |
	| start   | -p pause-795576 --memory=2048      | pause-795576              | jenkins | v1.30.1 | 17 Jul 23 19:15 UTC | 17 Jul 23 19:16 UTC |
	|         | --install-addons=false             |                           |         |         |                     |                     |
	|         | --wait=all --driver=docker         |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p running-upgrade-383497          | running-upgrade-383497    | jenkins | v1.30.1 | 17 Jul 23 19:15 UTC |                     |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker               |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-383497          | running-upgrade-383497    | jenkins | v1.30.1 | 17 Jul 23 19:15 UTC | 17 Jul 23 19:15 UTC |
	| delete  | -p force-systemd-env-020920        | force-systemd-env-020920  | jenkins | v1.30.1 | 17 Jul 23 19:15 UTC | 17 Jul 23 19:15 UTC |
	| start   | -p stopped-upgrade-435958          | stopped-upgrade-435958    | jenkins | v1.30.1 | 17 Jul 23 19:15 UTC |                     |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker               |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p force-systemd-flag-811473       | force-systemd-flag-811473 | jenkins | v1.30.1 | 17 Jul 23 19:15 UTC | 17 Jul 23 19:16 UTC |
	|         | --memory=2048 --force-systemd      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker               |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-435958          | stopped-upgrade-435958    | jenkins | v1.30.1 | 17 Jul 23 19:16 UTC | 17 Jul 23 19:16 UTC |
	| start   | -p kubernetes-upgrade-677764       | kubernetes-upgrade-677764 | jenkins | v1.30.1 | 17 Jul 23 19:16 UTC | 17 Jul 23 19:17 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0       |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker               |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-811473 ssh cat  | force-systemd-flag-811473 | jenkins | v1.30.1 | 17 Jul 23 19:16 UTC | 17 Jul 23 19:16 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-811473       | force-systemd-flag-811473 | jenkins | v1.30.1 | 17 Jul 23 19:16 UTC | 17 Jul 23 19:16 UTC |
	| start   | -p cert-expiration-383715          | cert-expiration-383715    | jenkins | v1.30.1 | 17 Jul 23 19:16 UTC | 17 Jul 23 19:16 UTC |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --cert-expiration=3m               |                           |         |         |                     |                     |
	|         | --driver=docker                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p pause-795576                    | pause-795576              | jenkins | v1.30.1 | 17 Jul 23 19:16 UTC | 17 Jul 23 19:17 UTC |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker               |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-677764       | kubernetes-upgrade-677764 | jenkins | v1.30.1 | 17 Jul 23 19:17 UTC | 17 Jul 23 19:17 UTC |
	| start   | -p kubernetes-upgrade-677764       | kubernetes-upgrade-677764 | jenkins | v1.30.1 | 17 Jul 23 19:17 UTC |                     |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3       |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker               |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p missing-upgrade-629154          | missing-upgrade-629154    | jenkins | v1.30.1 | 17 Jul 23 19:17 UTC |                     |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker               |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 19:17:04
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 19:17:04.135087  320967 out.go:296] Setting OutFile to fd 1 ...
	I0717 19:17:04.135207  320967 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 19:17:04.135219  320967 out.go:309] Setting ErrFile to fd 2...
	I0717 19:17:04.135225  320967 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 19:17:04.135448  320967 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-138069/.minikube/bin
	I0717 19:17:04.136085  320967 out.go:303] Setting JSON to false
	I0717 19:17:04.137930  320967 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":14375,"bootTime":1689607049,"procs":746,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 19:17:04.138007  320967 start.go:138] virtualization: kvm guest
	I0717 19:17:04.142980  320967 out.go:177] * [missing-upgrade-629154] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 19:17:04.145501  320967 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 19:17:04.145502  320967 notify.go:220] Checking for updates...
	I0717 19:17:04.147300  320967 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 19:17:04.149151  320967 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16890-138069/kubeconfig
	I0717 19:17:04.150918  320967 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-138069/.minikube
	I0717 19:17:04.152658  320967 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 19:17:04.154300  320967 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 19:17:04.156259  320967 config.go:182] Loaded profile config "missing-upgrade-629154": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0717 19:17:04.156287  320967 start_flags.go:695] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0717 19:17:04.158301  320967 out.go:177] * Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	I0717 19:17:04.159849  320967 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 19:17:04.185413  320967 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 19:17:04.185545  320967 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 19:17:04.247451  320967 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:66 SystemTime:2023-07-17 19:17:04.238013565 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
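The driver validation above shells out to docker system info --format "{{json .}}" and parses the whole JSON blob. For a quick manual spot-check of the same values, a minimal sketch (the field selection is illustrative; it assumes a reasonably recent Docker CLI on the host):

	# print the fields minikube relies on from the host Docker daemon
	docker system info --format '{{.ServerVersion}} {{.Driver}} {{.CgroupDriver}} {{.OperatingSystem}}'
	# on this agent: 24.0.4 overlay2 cgroupfs Ubuntu 20.04.6 LTS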
	I0717 19:17:04.247563  320967 docker.go:294] overlay module found
	I0717 19:17:04.250184  320967 out.go:177] * Using the docker driver based on existing profile
	I0717 19:17:04.252070  320967 start.go:298] selected driver: docker
	I0717 19:17:04.252089  320967 start.go:880] validating driver "docker" against &{Name:missing-upgrade-629154 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:missing-upgrade-629154 Namespace: APIServerName:minikubeCA APIServerNames:[] API
ServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: So
cketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:17:04.252203  320967 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 19:17:04.253010  320967 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 19:17:04.317268  320967 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:66 SystemTime:2023-07-17 19:17:04.308322866 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 19:17:04.317672  320967 cni.go:84] Creating CNI manager for ""
	I0717 19:17:04.317704  320967 cni.go:130] EnableDefaultCNI is true, recommending bridge
	I0717 19:17:04.317716  320967 start_flags.go:319] config:
	{Name:missing-upgrade-629154 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:missing-upgrade-629154 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlu
gin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:17:04.320368  320967 out.go:177] * Starting control plane node missing-upgrade-629154 in cluster missing-upgrade-629154
	I0717 19:17:04.322085  320967 cache.go:122] Beginning downloading kic base image for docker with crio
	I0717 19:17:04.323813  320967 out.go:177] * Pulling base image ...
	I0717 19:17:04.325531  320967 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I0717 19:17:04.325622  320967 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 19:17:04.342675  320967 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0717 19:17:04.342714  320967 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	W0717 19:17:04.353789  320967 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
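The 404 above only means that no preloaded image tarball is published for this Kubernetes/runtime combination (v1.18.0 on cri-o), so minikube falls back to the per-image cache shown in the lines that follow. A sketch of checking preload availability directly, using the URL from the warning:

	# request the preload tarball; 404 means minikube will cache images individually
	curl -s -o /dev/null -w '%{http_code}\n' \
	  https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4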
	I0717 19:17:04.354060  320967 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/missing-upgrade-629154/config.json ...
	I0717 19:17:04.354086  320967 cache.go:107] acquiring lock: {Name:mkf1a1130734b2d756a0657ef9722999f48d6c2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:17:04.354134  320967 cache.go:107] acquiring lock: {Name:mkd212c5db1f99d1e2779ee03e5908ac3123cf12 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:17:04.354149  320967 cache.go:107] acquiring lock: {Name:mkdd7c36248d43a8ed2da602bcfcaf77d0ba431f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:17:04.354204  320967 cache.go:107] acquiring lock: {Name:mkba162517b3c0d46459927d0c5ebda7dc236b77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:17:04.354229  320967 cache.go:115] /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0717 19:17:04.354255  320967 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 171.345µs
	I0717 19:17:04.354273  320967 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0717 19:17:04.354111  320967 cache.go:107] acquiring lock: {Name:mk99778cf263ded15bef16af944ba7e5e1c2f1a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:17:04.354262  320967 cache.go:107] acquiring lock: {Name:mkd892d265197bba9d74c85569bdbefabd7a9143 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:17:04.354309  320967 cache.go:115] /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I0717 19:17:04.354200  320967 cache.go:107] acquiring lock: {Name:mkd71aeba8a963da4395dc7d2ffea751af49e924 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:17:04.354263  320967 cache.go:115] /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0717 19:17:04.354373  320967 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 232.908µs
	I0717 19:17:04.354385  320967 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0717 19:17:04.354319  320967 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 221.217µs
	I0717 19:17:04.354389  320967 cache.go:115] /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I0717 19:17:04.354396  320967 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I0717 19:17:04.354396  320967 cache.go:115] /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I0717 19:17:04.354381  320967 cache.go:115] /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I0717 19:17:04.354405  320967 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 226.076µs
	I0717 19:17:04.354425  320967 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I0717 19:17:04.354420  320967 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 288.46µs
	I0717 19:17:04.354440  320967 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I0717 19:17:04.354422  320967 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 206.181µs
	I0717 19:17:04.354451  320967 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I0717 19:17:04.354388  320967 cache.go:195] Successfully downloaded all kic artifacts
	I0717 19:17:04.354484  320967 start.go:365] acquiring machines lock for missing-upgrade-629154: {Name:mk53dbc5c92f6c951a7a8d7b78be05ad027a74a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:17:04.354264  320967 cache.go:115] /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I0717 19:17:04.354532  320967 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 329.473µs
	I0717 19:17:04.354546  320967 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I0717 19:17:04.354302  320967 cache.go:107] acquiring lock: {Name:mk0626aa4c32952c38431bc57a3be6531c251df4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:17:04.354601  320967 cache.go:115] /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I0717 19:17:04.354614  320967 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 355.291µs
	I0717 19:17:04.354628  320967 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I0717 19:17:04.354624  320967 start.go:369] acquired machines lock for "missing-upgrade-629154" in 119.135µs
	I0717 19:17:04.354639  320967 cache.go:87] Successfully saved all images to host disk.
	I0717 19:17:04.354652  320967 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:17:04.354669  320967 fix.go:54] fixHost starting: m01
	I0717 19:17:04.354889  320967 cli_runner.go:164] Run: docker container inspect missing-upgrade-629154 --format={{.State.Status}}
	W0717 19:17:04.371087  320967 cli_runner.go:211] docker container inspect missing-upgrade-629154 --format={{.State.Status}} returned with exit code 1
	I0717 19:17:04.371160  320967 fix.go:102] recreateIfNeeded on missing-upgrade-629154: state= err=unknown state "missing-upgrade-629154": docker container inspect missing-upgrade-629154 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-629154
	I0717 19:17:04.371188  320967 fix.go:107] machineExists: false. err=machine does not exist
	I0717 19:17:04.374469  320967 out.go:177] * docker "missing-upgrade-629154" container is missing, will recreate.
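In this test the profile's container has been removed, so the state probe above fails with "No such container", machineExists reports false, and minikube falls through to recreating the node. The same probe can be reproduced by hand (sketch):

	# reproduce the state check minikube runs before deciding to recreate
	docker container inspect missing-upgrade-629154 --format '{{.State.Status}}' \
	  || echo "container missing - minikube will recreate it"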
	I0717 19:17:03.751876  318888 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:17:03.751901  318888 machine.go:91] provisioned docker machine in 6.025236942s
	I0717 19:17:03.751912  318888 start.go:300] post-start starting for "pause-795576" (driver="docker")
	I0717 19:17:03.751926  318888 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:17:03.752022  318888 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:17:03.752071  318888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-795576
	I0717 19:17:03.768715  318888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32948 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/pause-795576/id_rsa Username:docker}
	I0717 19:17:03.861844  318888 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:17:03.865227  318888 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 19:17:03.865254  318888 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 19:17:03.865262  318888 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 19:17:03.865268  318888 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0717 19:17:03.865279  318888 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-138069/.minikube/addons for local assets ...
	I0717 19:17:03.865329  318888 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-138069/.minikube/files for local assets ...
	I0717 19:17:03.865393  318888 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/1448222.pem -> 1448222.pem in /etc/ssl/certs
	I0717 19:17:03.865471  318888 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:17:03.873647  318888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/1448222.pem --> /etc/ssl/certs/1448222.pem (1708 bytes)
	I0717 19:17:03.899224  318888 start.go:303] post-start completed in 147.294688ms
	I0717 19:17:03.899306  318888 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 19:17:03.899354  318888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-795576
	I0717 19:17:03.918665  318888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32948 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/pause-795576/id_rsa Username:docker}
	I0717 19:17:04.008910  318888 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 19:17:04.013412  318888 fix.go:56] fixHost completed within 6.308108921s
	I0717 19:17:04.013439  318888 start.go:83] releasing machines lock for "pause-795576", held for 6.308164281s
	I0717 19:17:04.013588  318888 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-795576
	I0717 19:17:04.031642  318888 ssh_runner.go:195] Run: cat /version.json
	I0717 19:17:04.031705  318888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-795576
	I0717 19:17:04.031650  318888 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:17:04.031842  318888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-795576
	I0717 19:17:04.051022  318888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32948 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/pause-795576/id_rsa Username:docker}
	I0717 19:17:04.051420  318888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32948 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/pause-795576/id_rsa Username:docker}
	W0717 19:17:04.265682  318888 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
	I0717 19:17:04.265776  318888 ssh_runner.go:195] Run: systemctl --version
	I0717 19:17:04.270368  318888 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:17:04.420251  318888 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 19:17:04.425006  318888 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:17:04.433398  318888 cni.go:227] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0717 19:17:04.433471  318888 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:17:04.442675  318888 cni.go:265] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0717 19:17:04.442700  318888 start.go:469] detecting cgroup driver to use...
	I0717 19:17:04.442745  318888 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 19:17:04.442790  318888 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:17:04.456211  318888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:17:04.466899  318888 docker.go:196] disabling cri-docker service (if available) ...
	I0717 19:17:04.466954  318888 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:17:04.480212  318888 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:17:04.491452  318888 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:17:04.595500  318888 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:17:04.708034  318888 docker.go:212] disabling docker service ...
	I0717 19:17:04.708096  318888 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:17:04.719635  318888 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:17:04.729876  318888 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:17:04.906491  318888 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:17:05.364372  318888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:17:05.379441  318888 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:17:05.399739  318888 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 19:17:05.399818  318888 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:17:05.473884  318888 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:17:05.473956  318888 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:17:05.485450  318888 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:17:05.496941  318888 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
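The three sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with the settings minikube expects; the relevant keys afterwards are (sketch, other keys omitted):

	# relevant keys in /etc/crio/crio.conf.d/02-crio.conf after the edits above
	# pause_image = "registry.k8s.io/pause:3.9"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf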
	I0717 19:17:05.561686  318888 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:17:05.571827  318888 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:17:05.581021  318888 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:17:05.589445  318888 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:17:05.891923  318888 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:17:06.202089  318888 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:17:06.202151  318888 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:17:06.205824  318888 start.go:537] Will wait 60s for crictl version
	I0717 19:17:06.205882  318888 ssh_runner.go:195] Run: which crictl
	I0717 19:17:06.209229  318888 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:17:06.244306  318888 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0717 19:17:06.244386  318888 ssh_runner.go:195] Run: crio --version
	I0717 19:17:06.281634  318888 ssh_runner.go:195] Run: crio --version
	I0717 19:17:06.319722  318888 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.6 ...
	I0717 19:17:02.117237  319694 cli_runner.go:164] Run: docker start kubernetes-upgrade-677764
	I0717 19:17:02.435109  319694 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-677764 --format={{.State.Status}}
	I0717 19:17:02.452218  319694 kic.go:426] container "kubernetes-upgrade-677764" state is running.
	I0717 19:17:02.452677  319694 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-677764
	I0717 19:17:02.471422  319694 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/kubernetes-upgrade-677764/config.json ...
	I0717 19:17:02.471859  319694 machine.go:88] provisioning docker machine ...
	I0717 19:17:02.471882  319694 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-677764"
	I0717 19:17:02.471930  319694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-677764
	I0717 19:17:02.491565  319694 main.go:141] libmachine: Using SSH client type: native
	I0717 19:17:02.492338  319694 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32974 <nil> <nil>}
	I0717 19:17:02.492369  319694 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-677764 && echo "kubernetes-upgrade-677764" | sudo tee /etc/hostname
	I0717 19:17:02.492937  319694 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45724->127.0.0.1:32974: read: connection reset by peer
	I0717 19:17:05.635399  319694 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-677764
	
	I0717 19:17:05.635473  319694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-677764
	I0717 19:17:05.653320  319694 main.go:141] libmachine: Using SSH client type: native
	I0717 19:17:05.653845  319694 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32974 <nil> <nil>}
	I0717 19:17:05.653875  319694 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-677764' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-677764/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-677764' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:17:05.784528  319694 main.go:141] libmachine: SSH cmd err, output: <nil>: 
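The /etc/hosts edit above is idempotent: it only rewrites the 127.0.1.1 entry when the hostname is not already mapped. A one-line confirmation (sketch):

	# confirm the mapping the provisioner just ensured
	grep '127.0.1.1 kubernetes-upgrade-677764' /etc/hosts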
	I0717 19:17:05.784609  319694 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16890-138069/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-138069/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-138069/.minikube}
	I0717 19:17:05.784645  319694 ubuntu.go:177] setting up certificates
	I0717 19:17:05.784658  319694 provision.go:83] configureAuth start
	I0717 19:17:05.784725  319694 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-677764
	I0717 19:17:05.807192  319694 provision.go:138] copyHostCerts
	I0717 19:17:05.807273  319694 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-138069/.minikube/ca.pem, removing ...
	I0717 19:17:05.807285  319694 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-138069/.minikube/ca.pem
	I0717 19:17:05.807368  319694 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-138069/.minikube/ca.pem (1078 bytes)
	I0717 19:17:05.807478  319694 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-138069/.minikube/cert.pem, removing ...
	I0717 19:17:05.807488  319694 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-138069/.minikube/cert.pem
	I0717 19:17:05.807531  319694 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-138069/.minikube/cert.pem (1123 bytes)
	I0717 19:17:05.807601  319694 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-138069/.minikube/key.pem, removing ...
	I0717 19:17:05.807612  319694 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-138069/.minikube/key.pem
	I0717 19:17:05.807662  319694 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-138069/.minikube/key.pem (1675 bytes)
	I0717 19:17:05.807793  319694 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-138069/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-677764 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-677764]
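The server certificate generated here carries the SANs listed in the log (node IP 192.168.85.2, loopback, localhost, minikube, and the profile hostname). They can be inspected with openssl (sketch, path taken from the log line above):

	# list the SANs baked into the generated server certificate
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/16890-138069/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'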
	I0717 19:17:05.964068  319694 provision.go:172] copyRemoteCerts
	I0717 19:17:05.964155  319694 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:17:05.964206  319694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-677764
	I0717 19:17:05.984658  319694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32974 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/kubernetes-upgrade-677764/id_rsa Username:docker}
	I0717 19:17:06.082090  319694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 19:17:06.106944  319694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0717 19:17:06.133035  319694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 19:17:06.157656  319694 provision.go:86] duration metric: configureAuth took 372.978299ms
	I0717 19:17:06.157692  319694 ubuntu.go:193] setting minikube options for container-runtime
	I0717 19:17:06.157922  319694 config.go:182] Loaded profile config "kubernetes-upgrade-677764": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:17:06.158053  319694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-677764
	I0717 19:17:06.178158  319694 main.go:141] libmachine: Using SSH client type: native
	I0717 19:17:06.178846  319694 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32974 <nil> <nil>}
	I0717 19:17:06.178880  319694 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:17:06.469943  319694 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:17:06.469974  319694 machine.go:91] provisioned docker machine in 3.998099493s
	I0717 19:17:06.469987  319694 start.go:300] post-start starting for "kubernetes-upgrade-677764" (driver="docker")
	I0717 19:17:06.469999  319694 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:17:06.470080  319694 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:17:06.470131  319694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-677764
	I0717 19:17:06.497657  319694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32974 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/kubernetes-upgrade-677764/id_rsa Username:docker}
	I0717 19:17:06.593084  319694 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:17:06.596142  319694 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 19:17:06.596174  319694 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 19:17:06.596182  319694 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 19:17:06.596189  319694 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0717 19:17:06.596205  319694 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-138069/.minikube/addons for local assets ...
	I0717 19:17:06.596265  319694 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-138069/.minikube/files for local assets ...
	I0717 19:17:06.596352  319694 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/1448222.pem -> 1448222.pem in /etc/ssl/certs
	I0717 19:17:06.596461  319694 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:17:06.605044  319694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/1448222.pem --> /etc/ssl/certs/1448222.pem (1708 bytes)
	I0717 19:17:06.628368  319694 start.go:303] post-start completed in 158.364655ms
	I0717 19:17:06.628472  319694 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 19:17:06.628520  319694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-677764
	I0717 19:17:06.646743  319694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32974 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/kubernetes-upgrade-677764/id_rsa Username:docker}
	I0717 19:17:06.737085  319694 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 19:17:06.741399  319694 fix.go:56] fixHost completed within 4.655720225s
	I0717 19:17:06.741423  319694 start.go:83] releasing machines lock for "kubernetes-upgrade-677764", held for 4.655764905s
	I0717 19:17:06.741515  319694 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-677764
	I0717 19:17:06.760401  319694 ssh_runner.go:195] Run: cat /version.json
	I0717 19:17:06.760443  319694 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:17:06.760454  319694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-677764
	I0717 19:17:06.760520  319694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-677764
	I0717 19:17:06.780209  319694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32974 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/kubernetes-upgrade-677764/id_rsa Username:docker}
	I0717 19:17:06.781542  319694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32974 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/kubernetes-upgrade-677764/id_rsa Username:docker}
	I0717 19:17:06.321748  318888 cli_runner.go:164] Run: docker network inspect pause-795576 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 19:17:06.340215  318888 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0717 19:17:06.344375  318888 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 19:17:06.344426  318888 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:17:06.385379  318888 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 19:17:06.385403  318888 crio.go:415] Images already preloaded, skipping extraction
	I0717 19:17:06.385455  318888 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:17:06.421516  318888 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 19:17:06.421539  318888 cache_images.go:84] Images are preloaded, skipping loading
	I0717 19:17:06.421596  318888 ssh_runner.go:195] Run: crio config
	I0717 19:17:06.465827  318888 cni.go:84] Creating CNI manager for ""
	I0717 19:17:06.465852  318888 cni.go:149] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 19:17:06.465871  318888 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 19:17:06.465889  318888 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-795576 NodeName:pause-795576 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 19:17:06.466031  318888 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-795576"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 19:17:06.466104  318888 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=pause-795576 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:pause-795576 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
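The kubelet drop-in above is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf by the 422-byte scp a few lines below; once in place, the effective unit can be reviewed as systemd resolves it (sketch, run on the node):

	# show the kubelet unit plus the minikube drop-in
	systemctl cat kubelet
	# the resolved ExecStart should match the flags logged above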
	I0717 19:17:06.466152  318888 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 19:17:06.476366  318888 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:17:06.476449  318888 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:17:06.488299  318888 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (422 bytes)
	I0717 19:17:06.508305  318888 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 19:17:06.527773  318888 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2093 bytes)
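The kubeadm config logged above is a four-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) and has just been written to /var/tmp/minikube/kubeadm.yaml.new. A quick sanity check that the generated file parses as those four documents (sketch, assumes python3 with PyYAML on the node):

	# confirm the generated multi-document config parses and list the kinds
	python3 -c 'import sys, yaml; docs = list(yaml.safe_load_all(open(sys.argv[1]))); print([d.get("kind") for d in docs if d])' \
	  /var/tmp/minikube/kubeadm.yaml.new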
	I0717 19:17:06.548000  318888 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0717 19:17:06.551677  318888 certs.go:56] Setting up /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/pause-795576 for IP: 192.168.67.2
	I0717 19:17:06.551715  318888 certs.go:190] acquiring lock for shared ca certs: {Name:mk42196ce59710ebf500640671660e2f4656c84e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:17:06.551876  318888 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16890-138069/.minikube/ca.key
	I0717 19:17:06.551932  318888 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16890-138069/.minikube/proxy-client-ca.key
	I0717 19:17:06.552042  318888 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/pause-795576/client.key
	I0717 19:17:06.552136  318888 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/pause-795576/apiserver.key.c7fa3a9e
	I0717 19:17:06.552197  318888 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/pause-795576/proxy-client.key
	I0717 19:17:06.552352  318888 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/home/jenkins/minikube-integration/16890-138069/.minikube/certs/144822.pem (1338 bytes)
	W0717 19:17:06.552396  318888 certs.go:433] ignoring /home/jenkins/minikube-integration/16890-138069/.minikube/certs/home/jenkins/minikube-integration/16890-138069/.minikube/certs/144822_empty.pem, impossibly tiny 0 bytes
	I0717 19:17:06.552412  318888 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:17:06.552450  318888 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem (1078 bytes)
	I0717 19:17:06.552495  318888 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/home/jenkins/minikube-integration/16890-138069/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:17:06.552528  318888 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/home/jenkins/minikube-integration/16890-138069/.minikube/certs/key.pem (1675 bytes)
	I0717 19:17:06.552574  318888 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/1448222.pem (1708 bytes)
	I0717 19:17:06.553429  318888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/pause-795576/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 19:17:06.579049  318888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/pause-795576/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 19:17:06.602577  318888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/pause-795576/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:17:06.626285  318888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/pause-795576/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 19:17:06.650369  318888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:17:06.673739  318888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 19:17:06.698501  318888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:17:06.721122  318888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 19:17:06.744207  318888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:17:06.770536  318888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/certs/144822.pem --> /usr/share/ca-certificates/144822.pem (1338 bytes)
	I0717 19:17:06.795889  318888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/1448222.pem --> /usr/share/ca-certificates/1448222.pem (1708 bytes)
	I0717 19:17:06.818698  318888 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:17:06.836903  318888 ssh_runner.go:195] Run: openssl version
	I0717 19:17:06.842176  318888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:17:06.850633  318888 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:17:06.853922  318888 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:46 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:17:06.853979  318888 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:17:06.860335  318888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:17:06.869169  318888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144822.pem && ln -fs /usr/share/ca-certificates/144822.pem /etc/ssl/certs/144822.pem"
	I0717 19:17:06.880988  318888 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144822.pem
	I0717 19:17:06.885329  318888 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:51 /usr/share/ca-certificates/144822.pem
	I0717 19:17:06.885399  318888 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144822.pem
	I0717 19:17:06.892308  318888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/144822.pem /etc/ssl/certs/51391683.0"
	I0717 19:17:06.901646  318888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1448222.pem && ln -fs /usr/share/ca-certificates/1448222.pem /etc/ssl/certs/1448222.pem"
	I0717 19:17:06.912255  318888 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1448222.pem
	I0717 19:17:06.915775  318888 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:51 /usr/share/ca-certificates/1448222.pem
	I0717 19:17:06.915830  318888 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1448222.pem
	I0717 19:17:06.922733  318888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1448222.pem /etc/ssl/certs/3ec20f2e.0"
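Each CA is installed under /usr/share/ca-certificates and then symlinked into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0 for minikubeCA above). The hash in the link name comes straight from the command minikube runs (sketch):

	# derive the /etc/ssl/certs link name for the minikube CA
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941 here; the symlink is that hash plus a ".0" suffix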
	I0717 19:17:06.931816  318888 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 19:17:06.935362  318888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 19:17:06.941436  318888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 19:17:06.947993  318888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 19:17:06.954695  318888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 19:17:06.962346  318888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 19:17:06.969081  318888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
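The -checkend 86400 probes above exit 0 only if the certificate stays valid for at least another 86400 seconds (24 hours); a failing probe lets minikube notice certificates that need regenerating. The same check, spelled out (sketch):

	# fail loudly if the apiserver-etcd client cert expires within the next 24h
	openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-etcd-client.crt \
	  && echo "valid for at least 24h" || echo "expires within 24h"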
	I0717 19:17:06.977340  318888 kubeadm.go:404] StartCluster: {Name:pause-795576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:pause-795576 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clust
er.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage
-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:17:06.977502  318888 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:17:06.977552  318888 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:17:07.020731  318888 cri.go:89] found id: "3c4ecc96bf0b93221b1cd41bd5f1526256df3f33b13568ef0d2794d1eb83eca3"
	I0717 19:17:07.020755  318888 cri.go:89] found id: "d061dfba20f3f98ca0245fe20749ce94924338455dffdddfee06f3279e0c6ff8"
	I0717 19:17:07.020762  318888 cri.go:89] found id: "f6dec920e96cf327838fec0d74079f5434ac14535a8435b632e013e75fdb381f"
	I0717 19:17:07.020768  318888 cri.go:89] found id: "883de01fb9bb917f2b2c7d622bacd82c691a1416de9b63dd431f017be84c6eee"
	I0717 19:17:07.020773  318888 cri.go:89] found id: "e10be0e9af17f8987e24e478a481f2c0799874fb1457f4c96c792fa327c1e1fe"
	I0717 19:17:07.020778  318888 cri.go:89] found id: "6c2d784dbdd1891bc8ee8488658197313688eb7036abdb4530230dcf2eccb3fa"
	I0717 19:17:07.020784  318888 cri.go:89] found id: "249f2d6748858a003aca4fb5b25038d813f220e6af0a35223e06266835417555"
	I0717 19:17:07.020789  318888 cri.go:89] found id: ""
	I0717 19:17:07.020836  318888 ssh_runner.go:195] Run: sudo runc list -f json
	I0717 19:17:07.047824  318888 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"249f2d6748858a003aca4fb5b25038d813f220e6af0a35223e06266835417555","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/249f2d6748858a003aca4fb5b25038d813f220e6af0a35223e06266835417555/userdata","rootfs":"/var/lib/containers/storage/overlay/fd2aa9b207f49e48d0ff362c959bf5688104dfd9f16135423c4718b9aeebc107/merged","created":"2023-07-17T19:16:23.821869064Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"ef1f98f0","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"ef1f98f0\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMe
ssagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"249f2d6748858a003aca4fb5b25038d813f220e6af0a35223e06266835417555","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-07-17T19:16:23.737826652Z","io.kubernetes.cri-o.Image":"b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20230511-dc714da8","io.kubernetes.cri-o.ImageRef":"b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-blwth\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"7367b120-9ad2-48ef-a098-f9427cd70ce7\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-blwth_7367b120-9ad2-48ef-a098-f9427cd70ce7/kindnet-cni/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\"}","io.kubernetes.cri-o.MountPoint":
"/var/lib/containers/storage/overlay/fd2aa9b207f49e48d0ff362c959bf5688104dfd9f16135423c4718b9aeebc107/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-blwth_kube-system_7367b120-9ad2-48ef-a098-f9427cd70ce7_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/52b5cc4aad2ad9be691effa49714cc8f6b39045961a40662dd74c5acc9780241/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"52b5cc4aad2ad9be691effa49714cc8f6b39045961a40662dd74c5acc9780241","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-blwth_kube-system_7367b120-9ad2-48ef-a098-f9427cd70ce7_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"se
linux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/7367b120-9ad2-48ef-a098-f9427cd70ce7/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/7367b120-9ad2-48ef-a098-f9427cd70ce7/containers/kindnet-cni/0dcfda50\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/7367b120-9ad2-48ef-a098-f9427cd70ce7/volumes/kubernetes.io~projected/kube-api-access-cl564\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kindnet-blwth","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"7367b120-9ad2-48ef-a098-f9427cd70ce7"
,"kubernetes.io/config.seen":"2023-07-17T19:16:23.332665951Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3c4ecc96bf0b93221b1cd41bd5f1526256df3f33b13568ef0d2794d1eb83eca3","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/3c4ecc96bf0b93221b1cd41bd5f1526256df3f33b13568ef0d2794d1eb83eca3/userdata","rootfs":"/var/lib/containers/storage/overlay/984e04278704013a855ebd140b487dec437c7f2b88ad66ca0ad0ae3ccf7a5795/merged","created":"2023-07-17T19:17:05.090088778Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"88ae6cec","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"88ae6cec\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/de
v/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"3c4ecc96bf0b93221b1cd41bd5f1526256df3f33b13568ef0d2794d1eb83eca3","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-07-17T19:17:04.97761483Z","io.kubernetes.cri-o.Image":"08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.27.3","io.kubernetes.cri-o.ImageRef":"08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-795576\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"b854cc24c9327d52e830e509c0b45f70\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-795576_b854cc24c9327d52e830e509c0b45f70/kube-apiserver/1.log","io.kubernetes.c
ri-o.Metadata":"{\"name\":\"kube-apiserver\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/984e04278704013a855ebd140b487dec437c7f2b88ad66ca0ad0ae3ccf7a5795/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-795576_kube-system_b854cc24c9327d52e830e509c0b45f70_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/a59836c7236b4631707596f0175cb8e9117fee3121c48eec6988cf1f1d7d14d4/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"a59836c7236b4631707596f0175cb8e9117fee3121c48eec6988cf1f1d7d14d4","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-795576_kube-system_b854cc24c9327d52e830e509c0b45f70_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/b854cc24c9327d52e830e509c0b45f70/co
ntainers/kube-apiserver/e2c9bf0d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/b854cc24c9327d52e830e509c0b45f70/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"sel
inux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-pause-795576","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"b854cc24c9327d52e830e509c0b45f70","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"b854cc24c9327d52e830e509c0b45f70","kubernetes.io/config.seen":"2023-07-17T19:16:00.717752512Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6c2d784dbdd1891bc8ee8488658197313688eb7036abdb4530230dcf2eccb3fa","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/6c2d784dbdd1891bc8ee8488658197313688eb7036abdb4530230dcf2eccb3fa/userdata","rootfs":"/var/lib/containers/storage/overlay/de01942e3c1323fdb872b4cd4d75c3b8f377b3b580bae9af93589ce307c636f7/merged","created":"2023-07-17T19:16:23.870191032Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"47638398","io.kubernetes.container.na
me":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"47638398\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"6c2d784dbdd1891bc8ee8488658197313688eb7036abdb4530230dcf2eccb3fa","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-07-17T19:16:23.762428287Z","io.kubernetes.cri-o.Image":"5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-proxy:v1.27.3","io.kubernetes.cri-o.ImageRef":"5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c","io.kubernetes.cri-o.Labels":"{\"io.kub
ernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-vcv28\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"543aec10-6af6-4088-941a-d684da877b3f\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-vcv28_543aec10-6af6-4088-941a-d684da877b3f/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/de01942e3c1323fdb872b4cd4d75c3b8f377b3b580bae9af93589ce307c636f7/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-vcv28_kube-system_543aec10-6af6-4088-941a-d684da877b3f_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/be9d0f26dd7c3ab191a5abf36da714632cbd0f3cda9ce14b052bad43e9c67620/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"be9d0f26dd7c3ab191a5abf36da714632cbd0f3cda9ce14b052bad43e9c67620","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-vcv28_kube-system_543aec10-6af6-4088-941a-d684da877b3f_0","io.k
ubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/543aec10-6af6-4088-941a-d684da877b3f/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/543aec10-6af6-4088-941a-d684da877b3f/containers/kube-proxy/0aa973d9\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/543aec10-6af6-4088-941a-d684da877b3f/volumes/kubernetes.io~configmap/kube-proxy\",\"r
eadonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/543aec10-6af6-4088-941a-d684da877b3f/volumes/kubernetes.io~projected/kube-api-access-hh7kg\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-proxy-vcv28","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"543aec10-6af6-4088-941a-d684da877b3f","kubernetes.io/config.seen":"2023-07-17T19:16:23.331165044Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"883de01fb9bb917f2b2c7d622bacd82c691a1416de9b63dd431f017be84c6eee","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/883de01fb9bb917f2b2c7d622bacd82c691a1416de9b63dd431f017be84c6eee/userdata","rootfs":"/var/lib/containers/storage/overlay/642a656c15db83e8d642e2962a223fbbd43a29afb57a204f39604a8ee358de79/merged","created":"20
23-07-17T19:17:04.995896916Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"159e1046","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"159e1046\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"883de01fb9bb917f2b2c7d622bacd82c691a1416de9b63dd431f017be84c6eee","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-07-17T19:17:04.896943836Z","io.kubernetes.cri-o.Image":"41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-sch
eduler:v1.27.3","io.kubernetes.cri-o.ImageRef":"41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-795576\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"400d9ca1adcedd07ea455c43546148bb\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-795576_400d9ca1adcedd07ea455c43546148bb/kube-scheduler/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/642a656c15db83e8d642e2962a223fbbd43a29afb57a204f39604a8ee358de79/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-795576_kube-system_400d9ca1adcedd07ea455c43546148bb_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/8c0aa2dd28d39ba50cfb2072a76b03120b8e0f39d2e7bd70d851fe70c79305ce/userdata/resolv.conf","io.kube
rnetes.cri-o.SandboxID":"8c0aa2dd28d39ba50cfb2072a76b03120b8e0f39d2e7bd70d851fe70c79305ce","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-795576_kube-system_400d9ca1adcedd07ea455c43546148bb_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/400d9ca1adcedd07ea455c43546148bb/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/400d9ca1adcedd07ea455c43546148bb/containers/kube-scheduler/2ab0d4ac\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-pause-79
5576","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"400d9ca1adcedd07ea455c43546148bb","kubernetes.io/config.hash":"400d9ca1adcedd07ea455c43546148bb","kubernetes.io/config.seen":"2023-07-17T19:16:00.717755611Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d061dfba20f3f98ca0245fe20749ce94924338455dffdddfee06f3279e0c6ff8","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/d061dfba20f3f98ca0245fe20749ce94924338455dffdddfee06f3279e0c6ff8/userdata","rootfs":"/var/lib/containers/storage/overlay/efdd45245e1d01175642c7e1fe9efdd38e2efc70b57415517262a57f4d2a71a1/merged","created":"2023-07-17T19:17:05.080545496Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"97f28112","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes
.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"97f28112\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"d061dfba20f3f98ca0245fe20749ce94924338455dffdddfee06f3279e0c6ff8","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-07-17T19:17:04.966773768Z","io.kubernetes.cri-o.Image":"7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.27.3","io.kubernetes.cri-o.ImageRef":"7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-795576\",\"io.kuberne
tes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"1694f1546c77512884d0dfe3bf2a4ba0\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-795576_1694f1546c77512884d0dfe3bf2a4ba0/kube-controller-manager/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/efdd45245e1d01175642c7e1fe9efdd38e2efc70b57415517262a57f4d2a71a1/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-795576_kube-system_1694f1546c77512884d0dfe3bf2a4ba0_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/20ae7a52e858945587eb7f163d34f79bc2b9a6ce18aad1af8d65006001a8854c/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"20ae7a52e858945587eb7f163d34f79bc2b9a6ce18aad1af8d65006001a8854c","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-795576_kube-system_1694f1546c77512884d0dfe3bf2a4ba0_0","io.kube
rnetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/1694f1546c77512884d0dfe3bf2a4ba0/containers/kube-controller-manager/8a7a154a\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/1694f1546c77512884d0dfe3bf2a4ba0/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\
"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-pause-795576","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"1694f1546c77512884d0dfe3bf2a4ba0","kubernetes.io/config.hash":"1694f1546c77512884d0dfe3bf2a4ba0","kubern
etes.io/config.seen":"2023-07-17T19:16:00.717754116Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e10be0e9af17f8987e24e478a481f2c0799874fb1457f4c96c792fa327c1e1fe","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/e10be0e9af17f8987e24e478a481f2c0799874fb1457f4c96c792fa327c1e1fe/userdata","rootfs":"/var/lib/containers/storage/overlay/b3bae204884484a6b35550971ac8a6e805769241762f4c3a7c9c308965995a04/merged","created":"2023-07-17T19:16:55.408378152Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"5bffbcbc","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.ter
minationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"5bffbcbc\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"e10be0e9af17f8987e24e478a481f2c0799874fb1457f4c96c792fa327c1e1fe","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-07-17T19:16:55.363851867Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","io.kubernetes.cri-o.ImageName":"
registry.k8s.io/coredns/coredns:v1.10.1","io.kubernetes.cri-o.ImageRef":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-5d78c9869d-7bhk2\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"113dbc11-1279-4188-b57f-ef1a7476354e\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-5d78c9869d-7bhk2_113dbc11-1279-4188-b57f-ef1a7476354e/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/b3bae204884484a6b35550971ac8a6e805769241762f4c3a7c9c308965995a04/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-5d78c9869d-7bhk2_kube-system_113dbc11-1279-4188-b57f-ef1a7476354e_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/d649adf698c9dafde02b8a12fb695beb81795107e7d027d64cadfd235bb2ac80/userdata/resolv.conf","io.kubernetes.cri-o.S
andboxID":"d649adf698c9dafde02b8a12fb695beb81795107e7d027d64cadfd235bb2ac80","io.kubernetes.cri-o.SandboxName":"k8s_coredns-5d78c9869d-7bhk2_kube-system_113dbc11-1279-4188-b57f-ef1a7476354e_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/113dbc11-1279-4188-b57f-ef1a7476354e/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/113dbc11-1279-4188-b57f-ef1a7476354e/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/113dbc11-1279-4188-b57f-ef1a7476354e/containers/coredns/8db12501\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"
/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/113dbc11-1279-4188-b57f-ef1a7476354e/volumes/kubernetes.io~projected/kube-api-access-k8bj6\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-5d78c9869d-7bhk2","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"113dbc11-1279-4188-b57f-ef1a7476354e","kubernetes.io/config.seen":"2023-07-17T19:16:54.973191308Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f6dec920e96cf327838fec0d74079f5434ac14535a8435b632e013e75fdb381f","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/f6dec920e96cf327838fec0d74079f5434ac14535a8435b632e013e75fdb381f/userdata","rootfs":"/var/lib/containers/storage/overlay/346787948b55ebcb618ece9de2dd56e22018cfd47ef405d4603a6f740a88967c/merged","created":"2023-07-17T19:17:05.075290261Z","annotations":{"io.container.manager":"cri-o
","io.kubernetes.container.hash":"95733f07","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"95733f07\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"f6dec920e96cf327838fec0d74079f5434ac14535a8435b632e013e75fdb381f","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-07-17T19:17:04.925145157Z","io.kubernetes.cri-o.Image":"86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.7-0","io.kubernetes.cri-o.ImageRef":"86b6af7dd652c1b38118be1c338e9354b33469e69a218f
7e290a0ca5304ad681","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-pause-795576\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"d64400546f98bb129596be581950ced8\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-795576_d64400546f98bb129596be581950ced8/etcd/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/346787948b55ebcb618ece9de2dd56e22018cfd47ef405d4603a6f740a88967c/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-pause-795576_kube-system_d64400546f98bb129596be581950ced8_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/977426b5ad0d404749f9b90f6b18505fa16b074792252144b0b36642498b9e5c/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"977426b5ad0d404749f9b90f6b18505fa16b074792252144b0b36642498b9e5c","io.kubernetes.cri-o.SandboxName":"k8s_etcd-pause-795576_kube-system_d644
00546f98bb129596be581950ced8_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/d64400546f98bb129596be581950ced8/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/d64400546f98bb129596be581950ced8/containers/etcd/cf0ef901\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-pause-795576","io.kubernetes.pod.namespace":"kube-sys
tem","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"d64400546f98bb129596be581950ced8","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.67.2:2379","kubernetes.io/config.hash":"d64400546f98bb129596be581950ced8","kubernetes.io/config.seen":"2023-07-17T19:16:00.717746912Z","kubernetes.io/config.source":"file"},"owner":"root"}]
	I0717 19:17:07.048354  318888 cri.go:126] list returned 7 containers
	I0717 19:17:07.048372  318888 cri.go:129] container: {ID:249f2d6748858a003aca4fb5b25038d813f220e6af0a35223e06266835417555 Status:stopped}
	I0717 19:17:07.048392  318888 cri.go:135] skipping {249f2d6748858a003aca4fb5b25038d813f220e6af0a35223e06266835417555 stopped}: state = "stopped", want "paused"
	I0717 19:17:07.048406  318888 cri.go:129] container: {ID:3c4ecc96bf0b93221b1cd41bd5f1526256df3f33b13568ef0d2794d1eb83eca3 Status:stopped}
	I0717 19:17:07.048419  318888 cri.go:135] skipping {3c4ecc96bf0b93221b1cd41bd5f1526256df3f33b13568ef0d2794d1eb83eca3 stopped}: state = "stopped", want "paused"
	I0717 19:17:07.048429  318888 cri.go:129] container: {ID:6c2d784dbdd1891bc8ee8488658197313688eb7036abdb4530230dcf2eccb3fa Status:stopped}
	I0717 19:17:07.048437  318888 cri.go:135] skipping {6c2d784dbdd1891bc8ee8488658197313688eb7036abdb4530230dcf2eccb3fa stopped}: state = "stopped", want "paused"
	I0717 19:17:07.048447  318888 cri.go:129] container: {ID:883de01fb9bb917f2b2c7d622bacd82c691a1416de9b63dd431f017be84c6eee Status:stopped}
	I0717 19:17:07.048460  318888 cri.go:135] skipping {883de01fb9bb917f2b2c7d622bacd82c691a1416de9b63dd431f017be84c6eee stopped}: state = "stopped", want "paused"
	I0717 19:17:07.048475  318888 cri.go:129] container: {ID:d061dfba20f3f98ca0245fe20749ce94924338455dffdddfee06f3279e0c6ff8 Status:stopped}
	I0717 19:17:07.048486  318888 cri.go:135] skipping {d061dfba20f3f98ca0245fe20749ce94924338455dffdddfee06f3279e0c6ff8 stopped}: state = "stopped", want "paused"
	I0717 19:17:07.048493  318888 cri.go:129] container: {ID:e10be0e9af17f8987e24e478a481f2c0799874fb1457f4c96c792fa327c1e1fe Status:stopped}
	I0717 19:17:07.048505  318888 cri.go:135] skipping {e10be0e9af17f8987e24e478a481f2c0799874fb1457f4c96c792fa327c1e1fe stopped}: state = "stopped", want "paused"
	I0717 19:17:07.048515  318888 cri.go:129] container: {ID:f6dec920e96cf327838fec0d74079f5434ac14535a8435b632e013e75fdb381f Status:stopped}
	I0717 19:17:07.048523  318888 cri.go:135] skipping {f6dec920e96cf327838fec0d74079f5434ac14535a8435b632e013e75fdb381f stopped}: state = "stopped", want "paused"
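
[editor note] The skipping messages above are the result of filtering the runc listing for containers in the requested state ("paused"); every container in this run is "stopped", so nothing survives the filter. minikube does this filtering in Go (cri.go), but the same selection can be sketched on the node with jq, which is assumed to be installed and is not part of minikube:

    # Keep only the IDs of kube-system containers that are currently paused.
    sudo runc list -f json | jq -r '.[] | select(.status == "paused") | .id'
    # Empty output here: all seven containers are "stopped", hence the "skipping" lines above.
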
	I0717 19:17:07.048577  318888 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:17:07.059554  318888 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0717 19:17:07.059576  318888 kubeadm.go:636] restartCluster start
	I0717 19:17:07.059630  318888 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 19:17:07.069388  318888 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:07.070401  318888 kubeconfig.go:92] found "pause-795576" server: "https://192.168.67.2:8443"
	I0717 19:17:07.071928  318888 kapi.go:59] client config for pause-795576: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16890-138069/.minikube/profiles/pause-795576/client.crt", KeyFile:"/home/jenkins/minikube-integration/16890-138069/.minikube/profiles/pause-795576/client.key", CAFile:"/home/jenkins/minikube-integration/16890-138069/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
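
[editor note] The rest.Config above is built from the kubeconfig entry that the 'found "pause-795576" server' line refers to; its server URL and client certificate paths are what restartCluster later uses to probe the existing apiserver. A purely illustrative way to read the same server URL back out of the kubeconfig with standard kubectl (assuming the cluster entry carries the profile name, as minikube sets it):

    # Print the API server URL recorded for the pause-795576 cluster in the active kubeconfig.
    kubectl config view -o jsonpath='{.clusters[?(@.name=="pause-795576")].cluster.server}'
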
	I0717 19:17:07.072899  318888 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 19:17:07.081588  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:07.081647  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:07.091239  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
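
[editor note] The warning above comes from the apiserver status probe: pgrep -xnf requires the full command line of the newest matching process to match the pattern, and exit status 1 means no such kube-apiserver process exists yet, so the apiserver is treated as stopped at this point in the restart. A minimal sketch of the same probe:

    # The exit status of pgrep decides whether the apiserver is considered running.
    if pid=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*'); then
        echo "kube-apiserver running as pid ${pid}"
    else
        echo "kube-apiserver not running yet"   # the case logged as "stopped" above
    fi
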
	W0717 19:17:06.973034  319694 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
	I0717 19:17:06.973134  319694 ssh_runner.go:195] Run: systemctl --version
	I0717 19:17:06.978177  319694 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:17:07.122315  319694 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 19:17:07.127418  319694 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:17:07.137647  319694 cni.go:227] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0717 19:17:07.137731  319694 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:17:07.145955  319694 cni.go:265] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0717 19:17:07.145978  319694 start.go:469] detecting cgroup driver to use...
	I0717 19:17:07.146010  319694 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 19:17:07.146059  319694 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:17:07.156888  319694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:17:07.167672  319694 docker.go:196] disabling cri-docker service (if available) ...
	I0717 19:17:07.167719  319694 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:17:07.179907  319694 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:17:07.190272  319694 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:17:07.264921  319694 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:17:07.340775  319694 docker.go:212] disabling docker service ...
	I0717 19:17:07.340853  319694 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:17:07.353566  319694 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:17:07.364478  319694 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:17:07.430548  319694 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:17:07.507034  319694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:17:07.517521  319694 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:17:07.532966  319694 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 19:17:07.533024  319694 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:17:07.542853  319694 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:17:07.542927  319694 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:17:07.551715  319694 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:17:07.560532  319694 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:17:07.569414  319694 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:17:07.578293  319694 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:17:07.588353  319694 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:17:07.597167  319694 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:17:07.669313  319694 ssh_runner.go:195] Run: sudo systemctl restart crio
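
[editor note] The block above reconfigures CRI-O entirely through sed on its drop-in config and then restarts the service: the pause image is pinned to the one kubeadm expects, the cgroup manager is switched to cgroupfs to match the detected host driver, and conmon is moved into the pod cgroup. The same sequence, condensed from the commands in the log:

    conf=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$conf"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
    sudo sed -i '/conmon_cgroup = .*/d' "$conf"                         # drop any stale setting
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"  # re-add it after cgroup_manager
    sudo systemctl daemon-reload && sudo systemctl restart crio
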
	I0717 19:17:07.781188  319694 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:17:07.781266  319694 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:17:07.784741  319694 start.go:537] Will wait 60s for crictl version
	I0717 19:17:07.784801  319694 ssh_runner.go:195] Run: which crictl
	I0717 19:17:07.787919  319694 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:17:07.821326  319694 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0717 19:17:07.821414  319694 ssh_runner.go:195] Run: crio --version
	I0717 19:17:07.855779  319694 ssh_runner.go:195] Run: crio --version
	I0717 19:17:07.896159  319694 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.6 ...
	I0717 19:17:04.376078  320967 delete.go:124] DEMOLISHING missing-upgrade-629154 ...
	I0717 19:17:04.376210  320967 cli_runner.go:164] Run: docker container inspect missing-upgrade-629154 --format={{.State.Status}}
	W0717 19:17:04.391633  320967 cli_runner.go:211] docker container inspect missing-upgrade-629154 --format={{.State.Status}} returned with exit code 1
	W0717 19:17:04.391699  320967 stop.go:75] unable to get state: unknown state "missing-upgrade-629154": docker container inspect missing-upgrade-629154 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-629154
	I0717 19:17:04.391718  320967 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-629154": docker container inspect missing-upgrade-629154 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-629154
	I0717 19:17:04.392086  320967 cli_runner.go:164] Run: docker container inspect missing-upgrade-629154 --format={{.State.Status}}
	W0717 19:17:04.407621  320967 cli_runner.go:211] docker container inspect missing-upgrade-629154 --format={{.State.Status}} returned with exit code 1
	I0717 19:17:04.407712  320967 delete.go:82] Unable to get host status for missing-upgrade-629154, assuming it has already been deleted: state: unknown state "missing-upgrade-629154": docker container inspect missing-upgrade-629154 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-629154
	I0717 19:17:04.407773  320967 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-629154
	W0717 19:17:04.423177  320967 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-629154 returned with exit code 1
	I0717 19:17:04.423219  320967 kic.go:367] could not find the container missing-upgrade-629154 to remove it. will try anyways
	I0717 19:17:04.423266  320967 cli_runner.go:164] Run: docker container inspect missing-upgrade-629154 --format={{.State.Status}}
	W0717 19:17:04.441371  320967 cli_runner.go:211] docker container inspect missing-upgrade-629154 --format={{.State.Status}} returned with exit code 1
	W0717 19:17:04.441433  320967 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-629154": docker container inspect missing-upgrade-629154 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-629154
	I0717 19:17:04.441489  320967 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-629154 /bin/bash -c "sudo init 0"
	W0717 19:17:04.457872  320967 cli_runner.go:211] docker exec --privileged -t missing-upgrade-629154 /bin/bash -c "sudo init 0" returned with exit code 1
	I0717 19:17:04.457923  320967 oci.go:647] error shutdown missing-upgrade-629154: docker exec --privileged -t missing-upgrade-629154 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-629154
	I0717 19:17:05.458123  320967 cli_runner.go:164] Run: docker container inspect missing-upgrade-629154 --format={{.State.Status}}
	W0717 19:17:05.481145  320967 cli_runner.go:211] docker container inspect missing-upgrade-629154 --format={{.State.Status}} returned with exit code 1
	I0717 19:17:05.481233  320967 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-629154": docker container inspect missing-upgrade-629154 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-629154
	I0717 19:17:05.481247  320967 oci.go:661] temporary error: container missing-upgrade-629154 status is  but expect it to be exited
	I0717 19:17:05.481288  320967 retry.go:31] will retry after 430.085087ms: couldn't verify container is exited. %!v(MISSING): unknown state "missing-upgrade-629154": docker container inspect missing-upgrade-629154 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-629154
	I0717 19:17:05.911760  320967 cli_runner.go:164] Run: docker container inspect missing-upgrade-629154 --format={{.State.Status}}
	W0717 19:17:05.929551  320967 cli_runner.go:211] docker container inspect missing-upgrade-629154 --format={{.State.Status}} returned with exit code 1
	I0717 19:17:05.929629  320967 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-629154": docker container inspect missing-upgrade-629154 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-629154
	I0717 19:17:05.929642  320967 oci.go:661] temporary error: container missing-upgrade-629154 status is  but expect it to be exited
	I0717 19:17:05.929680  320967 retry.go:31] will retry after 610.025992ms: couldn't verify container is exited. %!v(MISSING): unknown state "missing-upgrade-629154": docker container inspect missing-upgrade-629154 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-629154
	I0717 19:17:06.540568  320967 cli_runner.go:164] Run: docker container inspect missing-upgrade-629154 --format={{.State.Status}}
	W0717 19:17:06.558719  320967 cli_runner.go:211] docker container inspect missing-upgrade-629154 --format={{.State.Status}} returned with exit code 1
	I0717 19:17:06.558782  320967 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-629154": docker container inspect missing-upgrade-629154 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-629154
	I0717 19:17:06.558811  320967 oci.go:661] temporary error: container missing-upgrade-629154 status is  but expect it to be exited
	I0717 19:17:06.558845  320967 retry.go:31] will retry after 1.175735401s: couldn't verify container is exited. %!v(MISSING): unknown state "missing-upgrade-629154": docker container inspect missing-upgrade-629154 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-629154
	I0717 19:17:07.735178  320967 cli_runner.go:164] Run: docker container inspect missing-upgrade-629154 --format={{.State.Status}}
	W0717 19:17:07.751601  320967 cli_runner.go:211] docker container inspect missing-upgrade-629154 --format={{.State.Status}} returned with exit code 1
	I0717 19:17:07.751672  320967 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-629154": docker container inspect missing-upgrade-629154 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-629154
	I0717 19:17:07.751688  320967 oci.go:661] temporary error: container missing-upgrade-629154 status is  but expect it to be exited
	I0717 19:17:07.751715  320967 retry.go:31] will retry after 1.488312422s: couldn't verify container is exited. %!v(MISSING): unknown state "missing-upgrade-629154": docker container inspect missing-upgrade-629154 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-629154
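
[editor note] The repeated inspect failures above are the demolish path giving the container every chance to report a state: because missing-upgrade-629154 never existed, docker inspect keeps exiting 1, so minikube logs "couldn't verify container is exited" and retries with growing delays before concluding the container is already gone (the %!v(MISSING) is again a log-formatting artifact). A simplified sketch of that backoff loop, with delays taken from the log and the rest illustrative:

    # Poll for an "exited" state; after a few growing delays, assume the container is gone.
    name=missing-upgrade-629154
    for delay in 0.43 0.61 1.18 1.49; do
        if docker container inspect "$name" --format '{{.State.Status}}' 2>/dev/null | grep -qx exited; then
            echo "${name} has exited"; break
        fi
        echo "could not verify ${name} is exited; retrying in ${delay}s"
        sleep "$delay"
    done
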
	I0717 19:17:07.897716  319694 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-677764 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 19:17:07.913972  319694 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0717 19:17:07.917570  319694 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:17:07.930480  319694 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 19:17:07.930556  319694 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:17:07.970335  319694 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0717 19:17:07.970401  319694 ssh_runner.go:195] Run: which lz4
	I0717 19:17:07.973785  319694 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 19:17:07.976894  319694 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 19:17:07.976930  319694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437094661 bytes)
	I0717 19:17:08.876790  319694 crio.go:444] Took 0.903039 seconds to copy over tarball
	I0717 19:17:08.876869  319694 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 19:17:10.923864  319694 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.046958184s)
	I0717 19:17:10.923891  319694 crio.go:451] Took 2.047072 seconds to extract the tarball
	I0717 19:17:10.923900  319694 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 19:17:10.995654  319694 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:17:11.033449  319694 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 19:17:11.033470  319694 cache_images.go:84] Images are preloaded, skipping loading
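
[editor note] The preload sequence above is: a stat shows /preloaded.tar.lz4 is not yet on the node, the cached tarball (437 MB) is copied over, unpacked into /var with lz4, removed, and then crictl confirms the v1.27.3 cri-o images are present. Assuming the tarball is already at /preloaded.tar.lz4, the node-side part reduces to:

    # Unpack the preloaded image cache into the container storage root and clean up.
    sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4   # needs the lz4 binary on the node
    sudo rm /preloaded.tar.lz4
    sudo crictl images --output json                 # verify the expected images are now listed
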
	I0717 19:17:11.033536  319694 ssh_runner.go:195] Run: crio config
	I0717 19:17:11.076475  319694 cni.go:84] Creating CNI manager for ""
	I0717 19:17:11.076498  319694 cni.go:149] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 19:17:11.076516  319694 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 19:17:11.076533  319694 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-677764 NodeName:kubernetes-upgrade-677764 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 19:17:11.076678  319694 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-677764"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 19:17:11.076739  319694 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-677764 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:kubernetes-upgrade-677764 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 19:17:11.076789  319694 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 19:17:11.085102  319694 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:17:11.085179  319694 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:17:11.093048  319694 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (435 bytes)
	I0717 19:17:11.108937  319694 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 19:17:11.124570  319694 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
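	The kubeadm.yaml generated above is staged as /var/tmp/minikube/kubeadm.yaml.new and, during the restart path further down in this log, replayed through individual kubeadm init phases rather than a full kubeadm init. A minimal sketch of that sequence, using the same binary and config paths as the log:

	    # Sketch only: the init phases the restart path runs against the generated config.
	    CONF=/var/tmp/minikube/kubeadm.yaml
	    KPATH="/var/lib/minikube/binaries/v1.27.3:$PATH"
	    sudo env PATH="$KPATH" kubeadm init phase certs all --config "$CONF"
	    sudo env PATH="$KPATH" kubeadm init phase kubeconfig all --config "$CONF"
	    sudo env PATH="$KPATH" kubeadm init phase kubelet-start --config "$CONF"
	    sudo env PATH="$KPATH" kubeadm init phase control-plane all --config "$CONF"
	    sudo env PATH="$KPATH" kubeadm init phase etcd local --config "$CONF"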
	I0717 19:17:11.140125  319694 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0717 19:17:11.143262  319694 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:17:11.153113  319694 certs.go:56] Setting up /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/kubernetes-upgrade-677764 for IP: 192.168.85.2
	I0717 19:17:11.153145  319694 certs.go:190] acquiring lock for shared ca certs: {Name:mk42196ce59710ebf500640671660e2f4656c84e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:17:11.153292  319694 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16890-138069/.minikube/ca.key
	I0717 19:17:11.153357  319694 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16890-138069/.minikube/proxy-client-ca.key
	I0717 19:17:11.153465  319694 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/kubernetes-upgrade-677764/client.key
	I0717 19:17:11.153534  319694 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/kubernetes-upgrade-677764/apiserver.key.43b9df8c
	I0717 19:17:11.153592  319694 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/kubernetes-upgrade-677764/proxy-client.key
	I0717 19:17:11.153723  319694 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/home/jenkins/minikube-integration/16890-138069/.minikube/certs/144822.pem (1338 bytes)
	W0717 19:17:11.153767  319694 certs.go:433] ignoring /home/jenkins/minikube-integration/16890-138069/.minikube/certs/home/jenkins/minikube-integration/16890-138069/.minikube/certs/144822_empty.pem, impossibly tiny 0 bytes
	I0717 19:17:11.153786  319694 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:17:11.153819  319694 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem (1078 bytes)
	I0717 19:17:11.153854  319694 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/home/jenkins/minikube-integration/16890-138069/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:17:11.153884  319694 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/home/jenkins/minikube-integration/16890-138069/.minikube/certs/key.pem (1675 bytes)
	I0717 19:17:11.153945  319694 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/1448222.pem (1708 bytes)
	I0717 19:17:11.154698  319694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/kubernetes-upgrade-677764/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 19:17:11.176452  319694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/kubernetes-upgrade-677764/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 19:17:11.197450  319694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/kubernetes-upgrade-677764/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:17:11.219208  319694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/kubernetes-upgrade-677764/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 19:17:11.240801  319694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:17:11.262498  319694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 19:17:11.284551  319694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:17:11.306440  319694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 19:17:11.328116  319694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/certs/144822.pem --> /usr/share/ca-certificates/144822.pem (1338 bytes)
	I0717 19:17:11.349097  319694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/1448222.pem --> /usr/share/ca-certificates/1448222.pem (1708 bytes)
	I0717 19:17:11.370305  319694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:17:11.391989  319694 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:17:11.409683  319694 ssh_runner.go:195] Run: openssl version
	I0717 19:17:11.414907  319694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144822.pem && ln -fs /usr/share/ca-certificates/144822.pem /etc/ssl/certs/144822.pem"
	I0717 19:17:11.423323  319694 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144822.pem
	I0717 19:17:11.426616  319694 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:51 /usr/share/ca-certificates/144822.pem
	I0717 19:17:11.426664  319694 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144822.pem
	I0717 19:17:11.433278  319694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/144822.pem /etc/ssl/certs/51391683.0"
	I0717 19:17:11.441142  319694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1448222.pem && ln -fs /usr/share/ca-certificates/1448222.pem /etc/ssl/certs/1448222.pem"
	I0717 19:17:11.449686  319694 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1448222.pem
	I0717 19:17:11.453061  319694 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:51 /usr/share/ca-certificates/1448222.pem
	I0717 19:17:11.453118  319694 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1448222.pem
	I0717 19:17:11.459228  319694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1448222.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 19:17:11.466968  319694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:17:11.475233  319694 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:17:11.478394  319694 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:46 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:17:11.478445  319694 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:17:11.484541  319694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
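	The three blocks above install each CA bundle under /usr/share/ca-certificates and link it into /etc/ssl/certs under its OpenSSL subject hash, which is how OpenSSL-based clients locate trusted CAs. A rough standalone equivalent for the minikube CA, with the same paths as the log:

	    # Sketch: link a CA cert under its subject hash so OpenSSL tooling trusts it.
	    CERT=/usr/share/ca-certificates/minikubeCA.pem
	    HASH=$(openssl x509 -hash -noout -in "$CERT")
	    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"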
	I0717 19:17:11.492555  319694 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 19:17:11.495732  319694 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 19:17:11.502135  319694 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 19:17:11.508481  319694 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 19:17:11.514577  319694 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 19:17:11.520744  319694 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 19:17:11.526774  319694 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
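	Each openssl run above uses -checkend 86400, i.e. "will this certificate still be valid 24 hours from now?"; a non-zero exit would force the cert to be regenerated. A compact sketch of the same check over the certs listed above (cert directory as in the log):

	    # Sketch: -checkend N exits non-zero if the cert expires within N seconds (here 24h).
	    for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
	             etcd/server etcd/peer etcd/healthcheck-client; do
	      sudo openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400 \
	        || echo "${c}.crt expires within 24h"
	    done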
	I0717 19:17:11.533715  319694 kubeadm.go:404] StartCluster: {Name:kubernetes-upgrade-677764 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:kubernetes-upgrade-677764 Namespace:default APIServerName:minikubeCA APIServerNames:[] APISe
rverIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwareP
ath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:17:11.533824  319694 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:17:11.533871  319694 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:17:11.567652  319694 cri.go:89] found id: ""
	I0717 19:17:11.567724  319694 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:17:11.575934  319694 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0717 19:17:11.575958  319694 kubeadm.go:636] restartCluster start
	I0717 19:17:11.576033  319694 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 19:17:11.583557  319694 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:11.584390  319694 kubeconfig.go:135] verify returned: extract IP: "kubernetes-upgrade-677764" does not appear in /home/jenkins/minikube-integration/16890-138069/kubeconfig
	I0717 19:17:11.584810  319694 kubeconfig.go:146] "kubernetes-upgrade-677764" context is missing from /home/jenkins/minikube-integration/16890-138069/kubeconfig - will repair!
	I0717 19:17:11.585467  319694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-138069/kubeconfig: {Name:mkc53c034e0e90a78d013670a58d5882070a3e3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:17:11.586395  319694 kapi.go:59] client config for kubernetes-upgrade-677764: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16890-138069/.minikube/profiles/kubernetes-upgrade-677764/client.crt", KeyFile:"/home/jenkins/minikube-integration/16890-138069/.minikube/profiles/kubernetes-upgrade-677764/client.key", CAFile:"/home/jenkins/minikube-integration/16890-138069/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(ni
l), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 19:17:11.587144  319694 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 19:17:11.595173  319694 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2023-07-17 19:16:29.700850359 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2023-07-17 19:17:11.135848458 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.85.2
	@@ -11,13 +11,13 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/crio/crio.sock
	+  criSocket: unix:///var/run/crio/crio.sock
	   name: "kubernetes-upgrade-677764"
	   kubeletExtraArgs:
	     node-ip: 192.168.85.2
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	@@ -31,16 +31,14 @@
	   extraArgs:
	     leader-elect: "false"
	 certificatesDir: /var/lib/minikube/certs
	-clusterName: kubernetes-upgrade-677764
	+clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	-dns:
	-  type: CoreDNS
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	     extraArgs:
	-      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.85.2:2381
	-kubernetesVersion: v1.16.0
	+      proxy-refresh-interval: "70000"
	+kubernetesVersion: v1.27.3
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
	I0717 19:17:11.595193  319694 kubeadm.go:1128] stopping kube-system containers ...
	I0717 19:17:11.595206  319694 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 19:17:11.595251  319694 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:17:11.629743  319694 cri.go:89] found id: ""
	I0717 19:17:11.629813  319694 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 19:17:11.640757  319694 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:17:11.648487  319694 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5703 Jul 17 19:16 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5743 Jul 17 19:16 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5823 Jul 17 19:16 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5687 Jul 17 19:16 /etc/kubernetes/scheduler.conf
	
	I0717 19:17:11.648545  319694 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:17:11.656096  319694 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:17:11.663849  319694 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:17:11.671654  319694 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:17:11.679410  319694 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:17:11.687153  319694 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 19:17:11.687174  319694 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:17:11.735063  319694 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:17:07.592253  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:07.592317  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:07.602874  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:08.091426  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:08.091520  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:08.103609  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:08.592219  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:08.592301  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:08.606487  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:09.092214  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:09.092291  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:09.102918  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:09.591408  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:09.591498  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:09.602029  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:10.091629  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:10.091723  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:10.102016  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:10.591554  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:10.591677  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:10.601759  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:11.092357  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:11.092435  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:11.101989  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:11.591428  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:11.591509  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:11.601562  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:12.092211  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:12.092316  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:12.102546  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
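	The block above is the apiserver wait loop for pid 318888: roughly every half second it re-runs pgrep for a kube-apiserver process and logs "stopped" while pgrep exits 1. In shell terms the loop is approximately:

	    # Sketch: wait until a kube-apiserver process matching minikube's pattern appears.
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      sleep 0.5
	    done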
	I0717 19:17:09.240615  320967 cli_runner.go:164] Run: docker container inspect missing-upgrade-629154 --format={{.State.Status}}
	W0717 19:17:09.260343  320967 cli_runner.go:211] docker container inspect missing-upgrade-629154 --format={{.State.Status}} returned with exit code 1
	I0717 19:17:09.260427  320967 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-629154": docker container inspect missing-upgrade-629154 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-629154
	I0717 19:17:09.260448  320967 oci.go:661] temporary error: container missing-upgrade-629154 status is  but expect it to be exited
	I0717 19:17:09.260524  320967 retry.go:31] will retry after 2.925283312s: couldn't verify container is exited. %!v(MISSING): unknown state "missing-upgrade-629154": docker container inspect missing-upgrade-629154 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-629154
	I0717 19:17:12.188659  320967 cli_runner.go:164] Run: docker container inspect missing-upgrade-629154 --format={{.State.Status}}
	W0717 19:17:12.204822  320967 cli_runner.go:211] docker container inspect missing-upgrade-629154 --format={{.State.Status}} returned with exit code 1
	I0717 19:17:12.204899  320967 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-629154": docker container inspect missing-upgrade-629154 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-629154
	I0717 19:17:12.204916  320967 oci.go:661] temporary error: container missing-upgrade-629154 status is  but expect it to be exited
	I0717 19:17:12.204941  320967 retry.go:31] will retry after 2.348489928s: couldn't verify container is exited. %!v(MISSING): unknown state "missing-upgrade-629154": docker container inspect missing-upgrade-629154 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-629154
	I0717 19:17:12.260088  319694 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:17:12.384271  319694 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:17:12.436569  319694 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:17:12.562426  319694 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:17:12.562489  319694 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:17:13.073206  319694 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:17:13.573875  319694 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:17:13.585224  319694 api_server.go:72] duration metric: took 1.022794969s to wait for apiserver process to appear ...
	I0717 19:17:13.585251  319694 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:17:13.585273  319694 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0717 19:17:12.592341  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:12.592414  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:12.602282  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:13.091814  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:13.091898  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:13.102925  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:13.591410  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:13.591497  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:13.601902  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:14.092039  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:14.092143  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:14.102651  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:14.591810  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:14.591897  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:14.601989  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:15.091535  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:15.091626  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:15.101734  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:15.591299  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:15.591383  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:15.601527  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:16.092099  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:16.092203  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:16.102156  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:16.591684  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:16.591790  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:16.601816  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:17.082386  318888 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0717 19:17:17.082436  318888 kubeadm.go:1128] stopping kube-system containers ...
	I0717 19:17:17.082451  318888 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 19:17:17.082517  318888 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:17:17.118915  318888 cri.go:89] found id: "ab7184693b8535872a6449bd84279882db6966e0d108be297584389fcbd446cd"
	I0717 19:17:17.118943  318888 cri.go:89] found id: "3c4ecc96bf0b93221b1cd41bd5f1526256df3f33b13568ef0d2794d1eb83eca3"
	I0717 19:17:17.118951  318888 cri.go:89] found id: "d061dfba20f3f98ca0245fe20749ce94924338455dffdddfee06f3279e0c6ff8"
	I0717 19:17:17.118957  318888 cri.go:89] found id: "f6dec920e96cf327838fec0d74079f5434ac14535a8435b632e013e75fdb381f"
	I0717 19:17:17.118963  318888 cri.go:89] found id: "883de01fb9bb917f2b2c7d622bacd82c691a1416de9b63dd431f017be84c6eee"
	I0717 19:17:17.118969  318888 cri.go:89] found id: "e10be0e9af17f8987e24e478a481f2c0799874fb1457f4c96c792fa327c1e1fe"
	I0717 19:17:17.118976  318888 cri.go:89] found id: "6c2d784dbdd1891bc8ee8488658197313688eb7036abdb4530230dcf2eccb3fa"
	I0717 19:17:17.118981  318888 cri.go:89] found id: "249f2d6748858a003aca4fb5b25038d813f220e6af0a35223e06266835417555"
	I0717 19:17:17.118985  318888 cri.go:89] found id: ""
	I0717 19:17:17.118990  318888 cri.go:234] Stopping containers: [ab7184693b8535872a6449bd84279882db6966e0d108be297584389fcbd446cd 3c4ecc96bf0b93221b1cd41bd5f1526256df3f33b13568ef0d2794d1eb83eca3 d061dfba20f3f98ca0245fe20749ce94924338455dffdddfee06f3279e0c6ff8 f6dec920e96cf327838fec0d74079f5434ac14535a8435b632e013e75fdb381f 883de01fb9bb917f2b2c7d622bacd82c691a1416de9b63dd431f017be84c6eee e10be0e9af17f8987e24e478a481f2c0799874fb1457f4c96c792fa327c1e1fe 6c2d784dbdd1891bc8ee8488658197313688eb7036abdb4530230dcf2eccb3fa 249f2d6748858a003aca4fb5b25038d813f220e6af0a35223e06266835417555]
	I0717 19:17:17.119041  318888 ssh_runner.go:195] Run: which crictl
	I0717 19:17:17.122491  318888 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 ab7184693b8535872a6449bd84279882db6966e0d108be297584389fcbd446cd 3c4ecc96bf0b93221b1cd41bd5f1526256df3f33b13568ef0d2794d1eb83eca3 d061dfba20f3f98ca0245fe20749ce94924338455dffdddfee06f3279e0c6ff8 f6dec920e96cf327838fec0d74079f5434ac14535a8435b632e013e75fdb381f 883de01fb9bb917f2b2c7d622bacd82c691a1416de9b63dd431f017be84c6eee e10be0e9af17f8987e24e478a481f2c0799874fb1457f4c96c792fa327c1e1fe 6c2d784dbdd1891bc8ee8488658197313688eb7036abdb4530230dcf2eccb3fa 249f2d6748858a003aca4fb5b25038d813f220e6af0a35223e06266835417555
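	Stopping the kube-system containers before reconfiguring is done through crictl, selecting them by the pod-namespace label exactly as the two commands above do; roughly:

	    # Sketch: collect kube-system container IDs and stop them with a 10s grace period.
	    IDS=$(sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system)
	    [ -n "$IDS" ] && sudo crictl stop --timeout=10 $IDS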
	I0717 19:17:14.554577  320967 cli_runner.go:164] Run: docker container inspect missing-upgrade-629154 --format={{.State.Status}}
	W0717 19:17:14.571476  320967 cli_runner.go:211] docker container inspect missing-upgrade-629154 --format={{.State.Status}} returned with exit code 1
	I0717 19:17:14.571547  320967 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-629154": docker container inspect missing-upgrade-629154 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-629154
	I0717 19:17:14.571564  320967 oci.go:661] temporary error: container missing-upgrade-629154 status is  but expect it to be exited
	I0717 19:17:14.571591  320967 retry.go:31] will retry after 3.344538832s: couldn't verify container is exited. %!v(MISSING): unknown state "missing-upgrade-629154": docker container inspect missing-upgrade-629154 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-629154
	I0717 19:17:17.916378  320967 cli_runner.go:164] Run: docker container inspect missing-upgrade-629154 --format={{.State.Status}}
	W0717 19:17:17.935179  320967 cli_runner.go:211] docker container inspect missing-upgrade-629154 --format={{.State.Status}} returned with exit code 1
	I0717 19:17:17.935270  320967 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-629154": docker container inspect missing-upgrade-629154 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-629154
	I0717 19:17:17.935292  320967 oci.go:661] temporary error: container missing-upgrade-629154 status is  but expect it to be exited
	I0717 19:17:17.935336  320967 oci.go:88] couldn't shut down missing-upgrade-629154 (might be okay): verify shutdown: couldn't verify container is exited. %!v(MISSING): unknown state "missing-upgrade-629154": docker container inspect missing-upgrade-629154 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-629154
	 
	I0717 19:17:17.935395  320967 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-629154
	I0717 19:17:17.953470  320967 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-629154
	W0717 19:17:17.973573  320967 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-629154 returned with exit code 1
	I0717 19:17:17.973680  320967 cli_runner.go:164] Run: docker network inspect  --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0717 19:17:17.992563  320967 cli_runner.go:211] docker network inspect  --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0717 19:17:17.992659  320967 network_create.go:281] running [docker network inspect ] to gather additional debugging logs...
	I0717 19:17:17.992690  320967 cli_runner.go:164] Run: docker network inspect 
	W0717 19:17:18.009786  320967 cli_runner.go:211] docker network inspect  returned with exit code 1
	I0717 19:17:18.009826  320967 network_create.go:284] error running [docker network inspect ]: docker network inspect : exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: 
	I0717 19:17:18.009839  320967 network_create.go:286] output of [docker network inspect ]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: 
	
	** /stderr **
	I0717 19:17:18.010035  320967 fix.go:114] Sleeping 1 second for extra luck!
	I0717 19:17:19.010169  320967 start.go:125] createHost starting for "m01" (driver="docker")
	I0717 19:17:19.012954  320967 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0717 19:17:19.013154  320967 start.go:159] libmachine.API.Create for "missing-upgrade-629154" (driver="docker")
	I0717 19:17:19.013189  320967 client.go:168] LocalClient.Create starting
	I0717 19:17:19.013296  320967 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem
	I0717 19:17:19.013340  320967 main.go:141] libmachine: Decoding PEM data...
	I0717 19:17:19.013361  320967 main.go:141] libmachine: Parsing certificate...
	I0717 19:17:19.013432  320967 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16890-138069/.minikube/certs/cert.pem
	I0717 19:17:19.013454  320967 main.go:141] libmachine: Decoding PEM data...
	I0717 19:17:19.013468  320967 main.go:141] libmachine: Parsing certificate...
	I0717 19:17:19.014386  320967 cli_runner.go:164] Run: docker network inspect missing-upgrade-629154 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0717 19:17:19.031228  320967 cli_runner.go:211] docker network inspect missing-upgrade-629154 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0717 19:17:19.031307  320967 network_create.go:281] running [docker network inspect missing-upgrade-629154] to gather additional debugging logs...
	I0717 19:17:19.031327  320967 cli_runner.go:164] Run: docker network inspect missing-upgrade-629154
	W0717 19:17:19.047202  320967 cli_runner.go:211] docker network inspect missing-upgrade-629154 returned with exit code 1
	I0717 19:17:19.047245  320967 network_create.go:284] error running [docker network inspect missing-upgrade-629154]: docker network inspect missing-upgrade-629154: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-629154 not found
	I0717 19:17:19.047260  320967 network_create.go:286] output of [docker network inspect missing-upgrade-629154]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-629154 not found
	
	** /stderr **
	I0717 19:17:19.047324  320967 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 19:17:19.064674  320967 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1070ebc8dfdf IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:9e:80:fb:8c} reservation:<nil>}
	I0717 19:17:19.065491  320967 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-743d16d82889 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:b6:c0:17:7b} reservation:<nil>}
	I0717 19:17:19.066074  320967 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-61bb7c620e40 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:de:71:25:d5} reservation:<nil>}
	I0717 19:17:19.066872  320967 network.go:214] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-daa1021b57a1 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:ac:ec:89:33} reservation:<nil>}
	I0717 19:17:19.067732  320967 network.go:214] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-75d7f2c6b3bf IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:02:42:07:a6:ac:63} reservation:<nil>}
	I0717 19:17:19.068803  320967 network.go:209] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00140ab20}
	I0717 19:17:19.068835  320967 network_create.go:123] attempt to create docker network missing-upgrade-629154 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I0717 19:17:19.068903  320967 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-629154 missing-upgrade-629154
	I0717 19:17:19.127380  320967 network_create.go:107] docker network missing-upgrade-629154 192.168.94.0/24 created
	I0717 19:17:19.127417  320967 kic.go:117] calculated static IP "192.168.94.2" for the "missing-upgrade-629154" container
	I0717 19:17:19.127480  320967 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
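	The network setup above walks the existing docker bridges (192.168.49/58/67/76/85.0/24 are all taken), picks the first free /24 (192.168.94.0/24), and creates a labeled bridge for the profile. The create call from the log, reformatted for readability:

	    # Sketch: the bridge network minikube just created, same flags and labels as logged.
	    docker network create --driver=bridge \
	      --subnet=192.168.94.0/24 --gateway=192.168.94.1 \
	      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	      --label=created_by.minikube.sigs.k8s.io=true \
	      --label=name.minikube.sigs.k8s.io=missing-upgrade-629154 \
	      missing-upgrade-629154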
	I0717 19:17:18.586971  319694 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 19:17:19.087815  319694 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0717 19:17:17.531585  318888 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 19:17:17.626924  318888 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:17:17.636069  318888 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jul 17 19:15 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jul 17 19:16 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Jul 17 19:16 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jul 17 19:16 /etc/kubernetes/scheduler.conf
	
	I0717 19:17:17.636156  318888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:17:17.644854  318888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:17:17.653576  318888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:17:17.662019  318888 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:17.662095  318888 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:17:17.670391  318888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:17:17.679253  318888 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:17.679334  318888 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:17:17.687631  318888 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:17:17.696369  318888 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 19:17:17.696393  318888 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:17:17.748307  318888 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:17:18.632331  318888 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:17:18.797010  318888 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:17:18.853832  318888 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:17:18.986878  318888 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:17:18.986960  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:17:19.498103  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:17:19.997578  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:17:20.497612  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:17:20.509814  318888 api_server.go:72] duration metric: took 1.522935408s to wait for apiserver process to appear ...
	I0717 19:17:20.509839  318888 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:17:20.509859  318888 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0717 19:17:22.411960  318888 api_server.go:279] https://192.168.67.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:17:22.412022  318888 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:17:22.912688  318888 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0717 19:17:22.918471  318888 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 19:17:22.918506  318888 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 19:17:23.413158  318888 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0717 19:17:23.418644  318888 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 19:17:23.418672  318888 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 19:17:23.912182  318888 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0717 19:17:23.917834  318888 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
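	The healthz probes above progress from 403 (the anonymous probe is rejected before the RBAC bootstrap roles exist) through verbose 500 responses listing the failing poststart hooks, to a plain 200 "ok". A manual probe of the same endpoint might look like the sketch below; -k is used because the serving cert is signed by the cluster's own CA, and a 403 is expected until anonymous access to /healthz is permitted:

	    # Sketch: verbose healthz shows the per-check status seen in the 500 responses above.
	    curl -k "https://192.168.67.2:8443/healthz?verbose"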
	I0717 19:17:23.926485  318888 api_server.go:141] control plane version: v1.27.3
	I0717 19:17:23.926517  318888 api_server.go:131] duration metric: took 3.416671828s to wait for apiserver health ...
	I0717 19:17:23.926528  318888 cni.go:84] Creating CNI manager for ""
	I0717 19:17:23.926537  318888 cni.go:149] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 19:17:23.929204  318888 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0717 19:17:19.144073  320967 cli_runner.go:164] Run: docker volume create missing-upgrade-629154 --label name.minikube.sigs.k8s.io=missing-upgrade-629154 --label created_by.minikube.sigs.k8s.io=true
	I0717 19:17:19.160133  320967 oci.go:103] Successfully created a docker volume missing-upgrade-629154
	I0717 19:17:19.160242  320967 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-629154-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-629154 --entrypoint /usr/bin/test -v missing-upgrade-629154:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0717 19:17:22.070001  320967 cli_runner.go:217] Completed: docker run --rm --name missing-upgrade-629154-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-629154 --entrypoint /usr/bin/test -v missing-upgrade-629154:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib: (2.909707016s)
	I0717 19:17:22.070032  320967 oci.go:107] Successfully prepared a docker volume missing-upgrade-629154
	I0717 19:17:22.070049  320967 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	W0717 19:17:22.070172  320967 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0717 19:17:22.070264  320967 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0717 19:17:22.129147  320967 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-629154 --name missing-upgrade-629154 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-629154 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-629154 --network missing-upgrade-629154 --ip 192.168.94.2 --volume missing-upgrade-629154:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0717 19:17:22.460589  320967 cli_runner.go:164] Run: docker container inspect missing-upgrade-629154 --format={{.State.Running}}
	I0717 19:17:22.489885  320967 cli_runner.go:164] Run: docker container inspect missing-upgrade-629154 --format={{.State.Status}}
	I0717 19:17:22.515709  320967 cli_runner.go:164] Run: docker exec missing-upgrade-629154 stat /var/lib/dpkg/alternatives/iptables
	I0717 19:17:22.590040  320967 oci.go:144] the created container "missing-upgrade-629154" has a running status.
	I0717 19:17:22.590075  320967 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16890-138069/.minikube/machines/missing-upgrade-629154/id_rsa...
	I0717 19:17:22.751433  320967 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16890-138069/.minikube/machines/missing-upgrade-629154/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0717 19:17:22.773136  320967 cli_runner.go:164] Run: docker container inspect missing-upgrade-629154 --format={{.State.Status}}
	I0717 19:17:22.792719  320967 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0717 19:17:22.792750  320967 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-629154 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0717 19:17:22.865866  320967 cli_runner.go:164] Run: docker container inspect missing-upgrade-629154 --format={{.State.Status}}
	I0717 19:17:22.887452  320967 machine.go:88] provisioning docker machine ...
	I0717 19:17:22.887490  320967 ubuntu.go:169] provisioning hostname "missing-upgrade-629154"
	I0717 19:17:22.887568  320967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-629154
	I0717 19:17:22.905358  320967 main.go:141] libmachine: Using SSH client type: native
	I0717 19:17:22.905977  320967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32979 <nil> <nil>}
	I0717 19:17:22.905994  320967 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-629154 && echo "missing-upgrade-629154" | sudo tee /etc/hostname
	I0717 19:17:22.906764  320967 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45632->127.0.0.1:32979: read: connection reset by peer
	I0717 19:17:24.088536  319694 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 19:17:24.088584  319694 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0717 19:17:23.930742  318888 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 19:17:23.934475  318888 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0717 19:17:23.934493  318888 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 19:17:23.950602  318888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0717 19:17:24.599328  318888 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:17:24.606223  318888 system_pods.go:59] 7 kube-system pods found
	I0717 19:17:24.606260  318888 system_pods.go:61] "coredns-5d78c9869d-7bhk2" [113dbc11-1279-4188-b57f-ef1a7476354e] Running
	I0717 19:17:24.606270  318888 system_pods.go:61] "etcd-pause-795576" [cb60766e-050b-459f-ab27-b4eb96c1cfb1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 19:17:24.606283  318888 system_pods.go:61] "kindnet-blwth" [7367b120-9ad2-48ef-a098-f9427cd70ce7] Running
	I0717 19:17:24.606295  318888 system_pods.go:61] "kube-apiserver-pause-795576" [deacff2a-f4f5-4573-985b-f50aec648951] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 19:17:24.606305  318888 system_pods.go:61] "kube-controller-manager-pause-795576" [7fe105ea-5ec8-4082-8c94-109c5613c844] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 19:17:24.606312  318888 system_pods.go:61] "kube-proxy-vcv28" [543aec10-6af6-4088-941a-d684da877b3f] Running
	I0717 19:17:24.606330  318888 system_pods.go:61] "kube-scheduler-pause-795576" [282169f5-c63d-4d71-9dd5-180ca707ac61] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 19:17:24.606337  318888 system_pods.go:74] duration metric: took 6.98622ms to wait for pod list to return data ...
	I0717 19:17:24.606346  318888 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:17:24.609591  318888 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0717 19:17:24.609618  318888 node_conditions.go:123] node cpu capacity is 8
	I0717 19:17:24.609627  318888 node_conditions.go:105] duration metric: took 3.276797ms to run NodePressure ...
	I0717 19:17:24.609647  318888 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:17:24.829854  318888 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0717 19:17:24.834970  318888 kubeadm.go:787] kubelet initialised
	I0717 19:17:24.834992  318888 kubeadm.go:788] duration metric: took 5.114607ms waiting for restarted kubelet to initialise ...
	I0717 19:17:24.835001  318888 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:17:24.840370  318888 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-7bhk2" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:24.845574  318888 pod_ready.go:92] pod "coredns-5d78c9869d-7bhk2" in "kube-system" namespace has status "Ready":"True"
	I0717 19:17:24.845597  318888 pod_ready.go:81] duration metric: took 5.201567ms waiting for pod "coredns-5d78c9869d-7bhk2" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:24.845608  318888 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-795576" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:26.856893  318888 pod_ready.go:102] pod "etcd-pause-795576" in "kube-system" namespace has status "Ready":"False"
	I0717 19:17:26.047863  320967 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-629154
	
	I0717 19:17:26.048003  320967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-629154
	I0717 19:17:26.066175  320967 main.go:141] libmachine: Using SSH client type: native
	I0717 19:17:26.066619  320967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32979 <nil> <nil>}
	I0717 19:17:26.066642  320967 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-629154' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-629154/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-629154' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:17:26.192305  320967 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:17:26.192337  320967 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16890-138069/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-138069/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-138069/.minikube}
	I0717 19:17:26.192357  320967 ubuntu.go:177] setting up certificates
	I0717 19:17:26.192366  320967 provision.go:83] configureAuth start
	I0717 19:17:26.192418  320967 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-629154
	I0717 19:17:26.209346  320967 provision.go:138] copyHostCerts
	I0717 19:17:26.209408  320967 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-138069/.minikube/ca.pem, removing ...
	I0717 19:17:26.209416  320967 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-138069/.minikube/ca.pem
	I0717 19:17:26.209481  320967 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-138069/.minikube/ca.pem (1078 bytes)
	I0717 19:17:26.209565  320967 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-138069/.minikube/cert.pem, removing ...
	I0717 19:17:26.209573  320967 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-138069/.minikube/cert.pem
	I0717 19:17:26.209595  320967 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-138069/.minikube/cert.pem (1123 bytes)
	I0717 19:17:26.209653  320967 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-138069/.minikube/key.pem, removing ...
	I0717 19:17:26.209661  320967 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-138069/.minikube/key.pem
	I0717 19:17:26.209682  320967 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-138069/.minikube/key.pem (1675 bytes)
	I0717 19:17:26.209729  320967 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-138069/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-629154 san=[192.168.94.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-629154]
	I0717 19:17:26.391286  320967 provision.go:172] copyRemoteCerts
	I0717 19:17:26.391347  320967 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:17:26.391387  320967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-629154
	I0717 19:17:26.409619  320967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32979 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/missing-upgrade-629154/id_rsa Username:docker}
	I0717 19:17:26.501111  320967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 19:17:26.524306  320967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0717 19:17:26.547309  320967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 19:17:26.569596  320967 provision.go:86] duration metric: configureAuth took 377.215595ms
	I0717 19:17:26.569626  320967 ubuntu.go:193] setting minikube options for container-runtime
	I0717 19:17:26.569809  320967 config.go:182] Loaded profile config "missing-upgrade-629154": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0717 19:17:26.569915  320967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-629154
	I0717 19:17:26.587252  320967 main.go:141] libmachine: Using SSH client type: native
	I0717 19:17:26.587695  320967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32979 <nil> <nil>}
	I0717 19:17:26.587716  320967 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:17:27.016114  320967 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:17:27.016149  320967 machine.go:91] provisioned docker machine in 4.128673908s
	I0717 19:17:27.016159  320967 client.go:171] LocalClient.Create took 8.002964436s
	I0717 19:17:27.016178  320967 start.go:167] duration metric: libmachine.API.Create for "missing-upgrade-629154" took 8.00302511s
	I0717 19:17:27.016187  320967 start.go:300] post-start starting for "missing-upgrade-629154" (driver="docker")
	I0717 19:17:27.016200  320967 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:17:27.016260  320967 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:17:27.016297  320967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-629154
	I0717 19:17:27.033706  320967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32979 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/missing-upgrade-629154/id_rsa Username:docker}
	I0717 19:17:27.125336  320967 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:17:27.128735  320967 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 19:17:27.128773  320967 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 19:17:27.128787  320967 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 19:17:27.128796  320967 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0717 19:17:27.128808  320967 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-138069/.minikube/addons for local assets ...
	I0717 19:17:27.128868  320967 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-138069/.minikube/files for local assets ...
	I0717 19:17:27.128976  320967 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/1448222.pem -> 1448222.pem in /etc/ssl/certs
	I0717 19:17:27.129095  320967 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:17:27.137171  320967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/1448222.pem --> /etc/ssl/certs/1448222.pem (1708 bytes)
	I0717 19:17:27.159571  320967 start.go:303] post-start completed in 143.365725ms
	I0717 19:17:27.159943  320967 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-629154
	I0717 19:17:27.177373  320967 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/missing-upgrade-629154/config.json ...
	I0717 19:17:27.177620  320967 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 19:17:27.177667  320967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-629154
	I0717 19:17:27.194051  320967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32979 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/missing-upgrade-629154/id_rsa Username:docker}
	I0717 19:17:27.281252  320967 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 19:17:27.285719  320967 start.go:128] duration metric: createHost completed in 8.275511714s
	I0717 19:17:27.285823  320967 cli_runner.go:164] Run: docker container inspect missing-upgrade-629154 --format={{.State.Status}}
	W0717 19:17:27.304721  320967 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 19:17:27.304759  320967 machine.go:88] provisioning docker machine ...
	I0717 19:17:27.304795  320967 ubuntu.go:169] provisioning hostname "missing-upgrade-629154"
	I0717 19:17:27.304854  320967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-629154
	I0717 19:17:27.323303  320967 main.go:141] libmachine: Using SSH client type: native
	I0717 19:17:27.323762  320967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32979 <nil> <nil>}
	I0717 19:17:27.323780  320967 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-629154 && echo "missing-upgrade-629154" | sudo tee /etc/hostname
	I0717 19:17:27.463320  320967 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-629154
	
	I0717 19:17:27.463427  320967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-629154
	I0717 19:17:27.480910  320967 main.go:141] libmachine: Using SSH client type: native
	I0717 19:17:27.481322  320967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32979 <nil> <nil>}
	I0717 19:17:27.481340  320967 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-629154' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-629154/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-629154' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:17:27.608434  320967 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:17:27.608471  320967 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16890-138069/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-138069/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-138069/.minikube}
	I0717 19:17:27.608507  320967 ubuntu.go:177] setting up certificates
	I0717 19:17:27.608519  320967 provision.go:83] configureAuth start
	I0717 19:17:27.608583  320967 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-629154
	I0717 19:17:27.626713  320967 provision.go:138] copyHostCerts
	I0717 19:17:27.626805  320967 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-138069/.minikube/ca.pem, removing ...
	I0717 19:17:27.626822  320967 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-138069/.minikube/ca.pem
	I0717 19:17:27.626895  320967 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-138069/.minikube/ca.pem (1078 bytes)
	I0717 19:17:27.627011  320967 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-138069/.minikube/cert.pem, removing ...
	I0717 19:17:27.627024  320967 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-138069/.minikube/cert.pem
	I0717 19:17:27.627053  320967 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-138069/.minikube/cert.pem (1123 bytes)
	I0717 19:17:27.627124  320967 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-138069/.minikube/key.pem, removing ...
	I0717 19:17:27.627135  320967 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-138069/.minikube/key.pem
	I0717 19:17:27.627160  320967 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-138069/.minikube/key.pem (1675 bytes)
	I0717 19:17:27.627236  320967 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-138069/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-629154 san=[192.168.94.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-629154]
	I0717 19:17:27.714471  320967 provision.go:172] copyRemoteCerts
	I0717 19:17:27.714534  320967 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:17:27.714586  320967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-629154
	I0717 19:17:27.732023  320967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32979 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/missing-upgrade-629154/id_rsa Username:docker}
	I0717 19:17:27.829082  320967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 19:17:27.851344  320967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0717 19:17:27.874597  320967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 19:17:27.897224  320967 provision.go:86] duration metric: configureAuth took 288.686927ms
	I0717 19:17:27.897251  320967 ubuntu.go:193] setting minikube options for container-runtime
	I0717 19:17:27.897418  320967 config.go:182] Loaded profile config "missing-upgrade-629154": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0717 19:17:27.897513  320967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-629154
	I0717 19:17:27.914517  320967 main.go:141] libmachine: Using SSH client type: native
	I0717 19:17:27.914920  320967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32979 <nil> <nil>}
	I0717 19:17:27.914937  320967 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:17:28.171362  320967 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:17:28.171397  320967 machine.go:91] provisioned docker machine in 866.624028ms
	I0717 19:17:28.171410  320967 start.go:300] post-start starting for "missing-upgrade-629154" (driver="docker")
	I0717 19:17:28.171423  320967 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:17:28.171484  320967 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:17:28.171529  320967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-629154
	I0717 19:17:28.188632  320967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32979 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/missing-upgrade-629154/id_rsa Username:docker}
	I0717 19:17:28.281181  320967 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:17:28.284431  320967 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 19:17:28.284485  320967 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 19:17:28.284497  320967 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 19:17:28.284508  320967 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0717 19:17:28.284520  320967 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-138069/.minikube/addons for local assets ...
	I0717 19:17:28.284586  320967 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-138069/.minikube/files for local assets ...
	I0717 19:17:28.284668  320967 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/1448222.pem -> 1448222.pem in /etc/ssl/certs
	I0717 19:17:28.284769  320967 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:17:28.293058  320967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/1448222.pem --> /etc/ssl/certs/1448222.pem (1708 bytes)
	I0717 19:17:28.315290  320967 start.go:303] post-start completed in 143.863909ms
	I0717 19:17:28.315362  320967 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 19:17:28.315405  320967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-629154
	I0717 19:17:28.332422  320967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32979 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/missing-upgrade-629154/id_rsa Username:docker}
	I0717 19:17:28.420734  320967 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 19:17:28.424810  320967 fix.go:56] fixHost completed within 24.070138706s
	I0717 19:17:28.424837  320967 start.go:83] releasing machines lock for "missing-upgrade-629154", held for 24.070192183s
	I0717 19:17:28.424924  320967 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-629154
	I0717 19:17:28.441058  320967 ssh_runner.go:195] Run: cat /version.json
	I0717 19:17:28.441107  320967 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:17:28.441131  320967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-629154
	I0717 19:17:28.441171  320967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-629154
	I0717 19:17:28.458194  320967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32979 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/missing-upgrade-629154/id_rsa Username:docker}
	I0717 19:17:28.459501  320967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32979 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/missing-upgrade-629154/id_rsa Username:docker}
	W0717 19:17:28.636159  320967 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
	I0717 19:17:28.636258  320967 ssh_runner.go:195] Run: systemctl --version
	I0717 19:17:28.640703  320967 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:17:28.778708  320967 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 19:17:28.783280  320967 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:17:28.801456  320967 cni.go:227] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0717 19:17:28.801540  320967 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:17:28.829717  320967 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0717 19:17:28.829747  320967 start.go:469] detecting cgroup driver to use...
	I0717 19:17:28.829786  320967 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 19:17:28.829837  320967 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:17:28.843756  320967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:17:28.854761  320967 docker.go:196] disabling cri-docker service (if available) ...
	I0717 19:17:28.854811  320967 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:17:28.867747  320967 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:17:28.881473  320967 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:17:28.956358  320967 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:17:29.033396  320967 docker.go:212] disabling docker service ...
	I0717 19:17:29.033460  320967 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:17:29.051767  320967 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:17:29.062851  320967 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:17:29.146285  320967 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:17:29.229241  320967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:17:29.239798  320967 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:17:29.255456  320967 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0717 19:17:29.255522  320967 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:17:29.264555  320967 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:17:29.264627  320967 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:17:29.274104  320967 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:17:29.283049  320967 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:17:29.291836  320967 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:17:29.300285  320967 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:17:29.308030  320967 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:17:29.315680  320967 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:17:29.388239  320967 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:17:29.489929  320967 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:17:29.489996  320967 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:17:29.493556  320967 start.go:537] Will wait 60s for crictl version
	I0717 19:17:29.493622  320967 ssh_runner.go:195] Run: which crictl
	I0717 19:17:29.497005  320967 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:17:29.531465  320967 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0717 19:17:29.531554  320967 ssh_runner.go:195] Run: crio --version
	I0717 19:17:29.566836  320967 ssh_runner.go:195] Run: crio --version
	I0717 19:17:29.603453  320967 out.go:177] * Preparing Kubernetes v1.18.0 on CRI-O 1.24.6 ...
	I0717 19:17:29.605137  320967 cli_runner.go:164] Run: docker network inspect missing-upgrade-629154 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 19:17:29.621320  320967 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0717 19:17:29.625049  320967 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:17:29.638463  320967 out.go:177]   - kubeadm.pod-network-cidr=10.244.0.0/16
	I0717 19:17:29.089493  319694 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 19:17:29.089536  319694 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0717 19:17:28.856999  318888 pod_ready.go:102] pod "etcd-pause-795576" in "kube-system" namespace has status "Ready":"False"
	I0717 19:17:31.355930  318888 pod_ready.go:92] pod "etcd-pause-795576" in "kube-system" namespace has status "Ready":"True"
	I0717 19:17:31.355952  318888 pod_ready.go:81] duration metric: took 6.510338235s waiting for pod "etcd-pause-795576" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:31.355965  318888 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-795576" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:29.640069  320967 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I0717 19:17:29.640135  320967 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:17:29.677081  320967 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.0". assuming images are not preloaded.
	I0717 19:17:29.677103  320967 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.0 registry.k8s.io/kube-controller-manager:v1.18.0 registry.k8s.io/kube-scheduler:v1.18.0 registry.k8s.io/kube-proxy:v1.18.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 19:17:29.677196  320967 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:17:29.677219  320967 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.0
	I0717 19:17:29.677228  320967 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.0
	I0717 19:17:29.677234  320967 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0717 19:17:29.677249  320967 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0717 19:17:29.677198  320967 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.0
	I0717 19:17:29.677335  320967 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0717 19:17:29.677198  320967 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.0
	I0717 19:17:29.678391  320967 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.0
	I0717 19:17:29.678402  320967 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0717 19:17:29.678453  320967 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0717 19:17:29.678466  320967 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.0
	I0717 19:17:29.678397  320967 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.0
	I0717 19:17:29.678500  320967 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:17:29.678399  320967 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.0
	I0717 19:17:29.678706  320967 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0717 19:17:29.824846  320967 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.0
	I0717 19:17:29.847646  320967 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0717 19:17:29.851422  320967 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0717 19:17:29.853167  320967 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.0
	I0717 19:17:29.853500  320967 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.0
	I0717 19:17:29.855295  320967 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.0
	I0717 19:17:29.864298  320967 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.0" does not exist at hash "74060cea7f70476f300d9f04fe2c3b3a2e84589e0579382a8df8c82161c3735c" in container runtime
	I0717 19:17:29.864374  320967 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.0
	I0717 19:17:29.864425  320967 ssh_runner.go:195] Run: which crictl
	I0717 19:17:29.868223  320967 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0717 19:17:29.957661  320967 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:17:29.973824  320967 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0717 19:17:29.973876  320967 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0717 19:17:29.973926  320967 ssh_runner.go:195] Run: which crictl
	I0717 19:17:29.982131  320967 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0717 19:17:29.982180  320967 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0717 19:17:29.982222  320967 ssh_runner.go:195] Run: which crictl
	I0717 19:17:30.020947  320967 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.0" needs transfer: "registry.k8s.io/kube-proxy:v1.18.0" does not exist at hash "43940c34f24f39bc9a00b4f9dbcab51a3b28952a7c392c119b877fcb48fe65a3" in container runtime
	I0717 19:17:30.020984  320967 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.0
	I0717 19:17:30.021031  320967 ssh_runner.go:195] Run: which crictl
	I0717 19:17:30.021036  320967 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.0" does not exist at hash "d3e55153f52fb62421dae9ad1a8690a3fd1b30f1b808e50a69a8e7ed5565e72e" in container runtime
	I0717 19:17:30.021080  320967 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.0
	I0717 19:17:30.021121  320967 ssh_runner.go:195] Run: which crictl
	I0717 19:17:30.021137  320967 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.0" does not exist at hash "a31f78c7c8ce146a60cc178c528dd08ca89320f2883e7eb804d7f7b062ed6466" in container runtime
	I0717 19:17:30.021169  320967 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.0
	I0717 19:17:30.021200  320967 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.0
	I0717 19:17:30.021235  320967 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0717 19:17:30.021206  320967 ssh_runner.go:195] Run: which crictl
	I0717 19:17:30.021262  320967 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0717 19:17:30.021294  320967 ssh_runner.go:195] Run: which crictl
	I0717 19:17:30.021311  320967 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0717 19:17:30.021348  320967 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:17:30.021386  320967 ssh_runner.go:195] Run: which crictl
	I0717 19:17:30.024950  320967 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0717 19:17:30.024983  320967 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.0
	I0717 19:17:30.025034  320967 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0717 19:17:30.063920  320967 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.0
	I0717 19:17:30.063920  320967 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.0
	I0717 19:17:30.173386  320967 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0
	I0717 19:17:30.173477  320967 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0717 19:17:30.173555  320967 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.18.0
	I0717 19:17:30.173614  320967 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:17:30.175927  320967 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0
	I0717 19:17:30.176051  320967 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.18.0
	I0717 19:17:30.184620  320967 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0717 19:17:30.184692  320967 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0
	I0717 19:17:30.184748  320967 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_1.6.7
	I0717 19:17:30.184777  320967 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.18.0
	I0717 19:17:30.184620  320967 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0717 19:17:30.184844  320967 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.4.3-0
	I0717 19:17:30.186917  320967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 --> /var/lib/minikube/images/kube-apiserver_v1.18.0 (51090432 bytes)
	I0717 19:17:30.187020  320967 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0
	I0717 19:17:30.187127  320967 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.18.0
	I0717 19:17:30.300082  320967 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0717 19:17:30.300102  320967 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0717 19:17:30.300198  320967 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0717 19:17:30.300204  320967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 --> /var/lib/minikube/images/kube-proxy_v1.18.0 (48857088 bytes)
	I0717 19:17:30.300269  320967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 --> /var/lib/minikube/images/coredns_1.6.7 (13600256 bytes)
	I0717 19:17:30.300287  320967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 --> /var/lib/minikube/images/kube-controller-manager_v1.18.0 (49124864 bytes)
	I0717 19:17:30.300198  320967 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.2
	I0717 19:17:30.300356  320967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 --> /var/lib/minikube/images/etcd_3.4.3-0 (100950016 bytes)
	I0717 19:17:30.300408  320967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 --> /var/lib/minikube/images/kube-scheduler_v1.18.0 (34077696 bytes)
	I0717 19:17:30.396695  320967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 --> /var/lib/minikube/images/pause_3.2 (301056 bytes)
	I0717 19:17:30.396727  320967 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0717 19:17:30.396762  320967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0717 19:17:30.487685  320967 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.2
	I0717 19:17:30.487888  320967 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.2
	I0717 19:17:30.681924  320967 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 from cache
	I0717 19:17:30.681962  320967 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0717 19:17:30.682013  320967 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0717 19:17:31.467239  320967 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0717 19:17:31.467283  320967 crio.go:257] Loading image: /var/lib/minikube/images/coredns_1.6.7
	I0717 19:17:31.467328  320967 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_1.6.7
	I0717 19:17:31.804792  320967 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 from cache
	I0717 19:17:31.804838  320967 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.18.0
	I0717 19:17:31.804896  320967 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.18.0
	I0717 19:17:32.946234  320967 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.18.0: (1.141304861s)
	I0717 19:17:32.946265  320967 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 from cache
	I0717 19:17:32.946288  320967 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.18.0
	I0717 19:17:32.946327  320967 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.18.0
	I0717 19:17:33.368050  318888 pod_ready.go:102] pod "kube-apiserver-pause-795576" in "kube-system" namespace has status "Ready":"False"
	I0717 19:17:34.867118  318888 pod_ready.go:92] pod "kube-apiserver-pause-795576" in "kube-system" namespace has status "Ready":"True"
	I0717 19:17:34.867141  318888 pod_ready.go:81] duration metric: took 3.511170042s waiting for pod "kube-apiserver-pause-795576" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:34.867154  318888 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-795576" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:34.872200  318888 pod_ready.go:92] pod "kube-controller-manager-pause-795576" in "kube-system" namespace has status "Ready":"True"
	I0717 19:17:34.872222  318888 pod_ready.go:81] duration metric: took 5.061874ms waiting for pod "kube-controller-manager-pause-795576" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:34.872234  318888 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vcv28" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:34.876974  318888 pod_ready.go:92] pod "kube-proxy-vcv28" in "kube-system" namespace has status "Ready":"True"
	I0717 19:17:34.876994  318888 pod_ready.go:81] duration metric: took 4.75416ms waiting for pod "kube-proxy-vcv28" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:34.877002  318888 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-795576" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:34.882008  318888 pod_ready.go:92] pod "kube-scheduler-pause-795576" in "kube-system" namespace has status "Ready":"True"
	I0717 19:17:34.882025  318888 pod_ready.go:81] duration metric: took 5.017488ms waiting for pod "kube-scheduler-pause-795576" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:34.882031  318888 pod_ready.go:38] duration metric: took 10.047022086s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:17:34.882048  318888 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 19:17:34.889459  318888 ops.go:34] apiserver oom_adj: -16
	I0717 19:17:34.889481  318888 kubeadm.go:640] restartCluster took 27.829897508s
	I0717 19:17:34.889489  318888 kubeadm.go:406] StartCluster complete in 27.912159818s
	I0717 19:17:34.889507  318888 settings.go:142] acquiring lock: {Name:mk9765434b8f4871dd605367f6caa71617d51b6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:17:34.889566  318888 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16890-138069/kubeconfig
	I0717 19:17:34.890985  318888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-138069/kubeconfig: {Name:mkc53c034e0e90a78d013670a58d5882070a3e3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:17:34.891218  318888 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 19:17:34.891367  318888 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0717 19:17:34.893621  318888 out.go:177] * Enabled addons: 
	I0717 19:17:34.891570  318888 config.go:182] Loaded profile config "pause-795576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:17:34.892386  318888 kapi.go:59] client config for pause-795576: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16890-138069/.minikube/profiles/pause-795576/client.crt", KeyFile:"/home/jenkins/minikube-integration/16890-138069/.minikube/profiles/pause-795576/client.key", CAFile:"/home/jenkins/minikube-integration/16890-138069/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 19:17:34.895686  318888 addons.go:502] enable addons completed in 4.319254ms: enabled=[]
	I0717 19:17:34.900065  318888 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-795576" context rescaled to 1 replicas
	I0717 19:17:34.900105  318888 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 19:17:34.901715  318888 out.go:177] * Verifying Kubernetes components...
	I0717 19:17:33.801110  319694 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": read tcp 192.168.85.1:50572->192.168.85.2:8443: read: connection reset by peer
	I0717 19:17:33.801167  319694 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0717 19:17:33.801586  319694 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0717 19:17:34.088017  319694 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0717 19:17:34.088418  319694 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0717 19:17:34.588032  319694 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0717 19:17:34.588510  319694 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0717 19:17:35.087132  319694 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0717 19:17:34.903227  318888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:17:34.975028  318888 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0717 19:17:34.975040  318888 node_ready.go:35] waiting up to 6m0s for node "pause-795576" to be "Ready" ...
	I0717 19:17:34.977527  318888 node_ready.go:49] node "pause-795576" has status "Ready":"True"
	I0717 19:17:34.977547  318888 node_ready.go:38] duration metric: took 2.489317ms waiting for node "pause-795576" to be "Ready" ...
	I0717 19:17:34.977557  318888 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:17:34.982915  318888 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-7bhk2" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:35.264001  318888 pod_ready.go:92] pod "coredns-5d78c9869d-7bhk2" in "kube-system" namespace has status "Ready":"True"
	I0717 19:17:35.264029  318888 pod_ready.go:81] duration metric: took 281.084061ms waiting for pod "coredns-5d78c9869d-7bhk2" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:35.264039  318888 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-795576" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:35.664667  318888 pod_ready.go:92] pod "etcd-pause-795576" in "kube-system" namespace has status "Ready":"True"
	I0717 19:17:35.664695  318888 pod_ready.go:81] duration metric: took 400.647826ms waiting for pod "etcd-pause-795576" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:35.664711  318888 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-795576" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:36.064629  318888 pod_ready.go:92] pod "kube-apiserver-pause-795576" in "kube-system" namespace has status "Ready":"True"
	I0717 19:17:36.064655  318888 pod_ready.go:81] duration metric: took 399.935907ms waiting for pod "kube-apiserver-pause-795576" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:36.064666  318888 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-795576" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:36.464603  318888 pod_ready.go:92] pod "kube-controller-manager-pause-795576" in "kube-system" namespace has status "Ready":"True"
	I0717 19:17:36.464628  318888 pod_ready.go:81] duration metric: took 399.955789ms waiting for pod "kube-controller-manager-pause-795576" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:36.464638  318888 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vcv28" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:36.864714  318888 pod_ready.go:92] pod "kube-proxy-vcv28" in "kube-system" namespace has status "Ready":"True"
	I0717 19:17:36.864736  318888 pod_ready.go:81] duration metric: took 400.092782ms waiting for pod "kube-proxy-vcv28" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:36.864745  318888 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-795576" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:37.264940  318888 pod_ready.go:92] pod "kube-scheduler-pause-795576" in "kube-system" namespace has status "Ready":"True"
	I0717 19:17:37.264967  318888 pod_ready.go:81] duration metric: took 400.214774ms waiting for pod "kube-scheduler-pause-795576" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:37.264981  318888 pod_ready.go:38] duration metric: took 2.287410265s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:17:37.265001  318888 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:17:37.265055  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:17:37.276679  318888 api_server.go:72] duration metric: took 2.376534107s to wait for apiserver process to appear ...
	I0717 19:17:37.276709  318888 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:17:37.276726  318888 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0717 19:17:37.281249  318888 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0717 19:17:37.282295  318888 api_server.go:141] control plane version: v1.27.3
	I0717 19:17:37.282319  318888 api_server.go:131] duration metric: took 5.603456ms to wait for apiserver health ...
	I0717 19:17:37.282329  318888 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:17:37.467541  318888 system_pods.go:59] 7 kube-system pods found
	I0717 19:17:37.467573  318888 system_pods.go:61] "coredns-5d78c9869d-7bhk2" [113dbc11-1279-4188-b57f-ef1a7476354e] Running
	I0717 19:17:37.467581  318888 system_pods.go:61] "etcd-pause-795576" [cb60766e-050b-459f-ab27-b4eb96c1cfb1] Running
	I0717 19:17:37.467586  318888 system_pods.go:61] "kindnet-blwth" [7367b120-9ad2-48ef-a098-f9427cd70ce7] Running
	I0717 19:17:37.467592  318888 system_pods.go:61] "kube-apiserver-pause-795576" [deacff2a-f4f5-4573-985b-f50aec648951] Running
	I0717 19:17:37.467597  318888 system_pods.go:61] "kube-controller-manager-pause-795576" [7fe105ea-5ec8-4082-8c94-109c5613c844] Running
	I0717 19:17:37.467603  318888 system_pods.go:61] "kube-proxy-vcv28" [543aec10-6af6-4088-941a-d684da877b3f] Running
	I0717 19:17:37.467608  318888 system_pods.go:61] "kube-scheduler-pause-795576" [282169f5-c63d-4d71-9dd5-180ca707ac61] Running
	I0717 19:17:37.467618  318888 system_pods.go:74] duration metric: took 185.280635ms to wait for pod list to return data ...
	I0717 19:17:37.467628  318888 default_sa.go:34] waiting for default service account to be created ...
	I0717 19:17:37.664184  318888 default_sa.go:45] found service account: "default"
	I0717 19:17:37.664211  318888 default_sa.go:55] duration metric: took 196.57685ms for default service account to be created ...
	I0717 19:17:37.664219  318888 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 19:17:37.867944  318888 system_pods.go:86] 7 kube-system pods found
	I0717 19:17:37.868007  318888 system_pods.go:89] "coredns-5d78c9869d-7bhk2" [113dbc11-1279-4188-b57f-ef1a7476354e] Running
	I0717 19:17:37.868020  318888 system_pods.go:89] "etcd-pause-795576" [cb60766e-050b-459f-ab27-b4eb96c1cfb1] Running
	I0717 19:17:37.868025  318888 system_pods.go:89] "kindnet-blwth" [7367b120-9ad2-48ef-a098-f9427cd70ce7] Running
	I0717 19:17:37.868032  318888 system_pods.go:89] "kube-apiserver-pause-795576" [deacff2a-f4f5-4573-985b-f50aec648951] Running
	I0717 19:17:37.868036  318888 system_pods.go:89] "kube-controller-manager-pause-795576" [7fe105ea-5ec8-4082-8c94-109c5613c844] Running
	I0717 19:17:37.868041  318888 system_pods.go:89] "kube-proxy-vcv28" [543aec10-6af6-4088-941a-d684da877b3f] Running
	I0717 19:17:37.868045  318888 system_pods.go:89] "kube-scheduler-pause-795576" [282169f5-c63d-4d71-9dd5-180ca707ac61] Running
	I0717 19:17:37.868051  318888 system_pods.go:126] duration metric: took 203.827832ms to wait for k8s-apps to be running ...
	I0717 19:17:37.868058  318888 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 19:17:37.868104  318888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:17:37.882536  318888 system_svc.go:56] duration metric: took 14.46342ms WaitForService to wait for kubelet.
	I0717 19:17:37.882566  318888 kubeadm.go:581] duration metric: took 2.982428447s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 19:17:37.882591  318888 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:17:38.064900  318888 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0717 19:17:38.064924  318888 node_conditions.go:123] node cpu capacity is 8
	I0717 19:17:38.064935  318888 node_conditions.go:105] duration metric: took 182.337085ms to run NodePressure ...
	I0717 19:17:38.064945  318888 start.go:228] waiting for startup goroutines ...
	I0717 19:17:38.064951  318888 start.go:233] waiting for cluster config update ...
	I0717 19:17:38.064958  318888 start.go:242] writing updated cluster config ...
	I0717 19:17:38.065224  318888 ssh_runner.go:195] Run: rm -f paused
	I0717 19:17:38.120289  318888 start.go:578] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0717 19:17:38.122981  318888 out.go:177] * Done! kubectl is now configured to use "pause-795576" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Jul 17 19:17:23 pause-795576 crio[2869]: time="2023-07-17 19:17:23.189965051Z" level=warning msg="Allowed annotations are specified for workload []"
	Jul 17 19:17:23 pause-795576 crio[2869]: time="2023-07-17 19:17:23.190034948Z" level=warning msg="Allowed annotations are specified for workload []"
	Jul 17 19:17:23 pause-795576 crio[2869]: time="2023-07-17 19:17:23.219870677Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/c8b2e7919c165578172869b71ee2fb4ee5ee2cb3be7847b03e13b3bd86c4f451/merged/etc/passwd: no such file or directory"
	Jul 17 19:17:23 pause-795576 crio[2869]: time="2023-07-17 19:17:23.219927612Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/c8b2e7919c165578172869b71ee2fb4ee5ee2cb3be7847b03e13b3bd86c4f451/merged/etc/group: no such file or directory"
	Jul 17 19:17:23 pause-795576 crio[2869]: time="2023-07-17 19:17:23.293482311Z" level=info msg="Created container c7d9925d4d034c86f811fcaa0fc3e82d9e6c6d2aa3586c572cb69b949b380aae: kube-system/kube-proxy-vcv28/kube-proxy" id=0c7c45e8-726c-4529-942f-15fb262b27eb name=/runtime.v1.RuntimeService/CreateContainer
	Jul 17 19:17:23 pause-795576 crio[2869]: time="2023-07-17 19:17:23.293728547Z" level=info msg="Created container 50720c457cc27be585b0bcec78fc0350e552eacbc5e5d3985113f7cdfffb3ec1: kube-system/coredns-5d78c9869d-7bhk2/coredns" id=e687d198-1472-4548-b4c3-03717ded0a5d name=/runtime.v1.RuntimeService/CreateContainer
	Jul 17 19:17:23 pause-795576 crio[2869]: time="2023-07-17 19:17:23.294247896Z" level=info msg="Starting container: 50720c457cc27be585b0bcec78fc0350e552eacbc5e5d3985113f7cdfffb3ec1" id=d133d45a-4093-424c-b314-2453e869b54c name=/runtime.v1.RuntimeService/StartContainer
	Jul 17 19:17:23 pause-795576 crio[2869]: time="2023-07-17 19:17:23.294430824Z" level=info msg="Starting container: c7d9925d4d034c86f811fcaa0fc3e82d9e6c6d2aa3586c572cb69b949b380aae" id=ce4a81de-6905-4e59-94c4-f8d7989578e5 name=/runtime.v1.RuntimeService/StartContainer
	Jul 17 19:17:23 pause-795576 crio[2869]: time="2023-07-17 19:17:23.294771190Z" level=info msg="Created container a5b342c3188d4dada0219660ad4c433081d03f89457a114d9d6e0e04ee02126e: kube-system/kindnet-blwth/kindnet-cni" id=f0a1b2f5-44e0-4d5e-be59-a97f99214b0f name=/runtime.v1.RuntimeService/CreateContainer
	Jul 17 19:17:23 pause-795576 crio[2869]: time="2023-07-17 19:17:23.295189765Z" level=info msg="Starting container: a5b342c3188d4dada0219660ad4c433081d03f89457a114d9d6e0e04ee02126e" id=1ed90718-a316-4779-ba8a-9c9e9f40121a name=/runtime.v1.RuntimeService/StartContainer
	Jul 17 19:17:23 pause-795576 crio[2869]: time="2023-07-17 19:17:23.304472948Z" level=info msg="Started container" PID=3717 containerID=50720c457cc27be585b0bcec78fc0350e552eacbc5e5d3985113f7cdfffb3ec1 description=kube-system/coredns-5d78c9869d-7bhk2/coredns id=d133d45a-4093-424c-b314-2453e869b54c name=/runtime.v1.RuntimeService/StartContainer sandboxID=d649adf698c9dafde02b8a12fb695beb81795107e7d027d64cadfd235bb2ac80
	Jul 17 19:17:23 pause-795576 crio[2869]: time="2023-07-17 19:17:23.306525558Z" level=info msg="Started container" PID=3720 containerID=a5b342c3188d4dada0219660ad4c433081d03f89457a114d9d6e0e04ee02126e description=kube-system/kindnet-blwth/kindnet-cni id=1ed90718-a316-4779-ba8a-9c9e9f40121a name=/runtime.v1.RuntimeService/StartContainer sandboxID=52b5cc4aad2ad9be691effa49714cc8f6b39045961a40662dd74c5acc9780241
	Jul 17 19:17:23 pause-795576 crio[2869]: time="2023-07-17 19:17:23.307693336Z" level=info msg="Started container" PID=3721 containerID=c7d9925d4d034c86f811fcaa0fc3e82d9e6c6d2aa3586c572cb69b949b380aae description=kube-system/kube-proxy-vcv28/kube-proxy id=ce4a81de-6905-4e59-94c4-f8d7989578e5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=be9d0f26dd7c3ab191a5abf36da714632cbd0f3cda9ce14b052bad43e9c67620
	Jul 17 19:17:23 pause-795576 crio[2869]: time="2023-07-17 19:17:23.766821565Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Jul 17 19:17:23 pause-795576 crio[2869]: time="2023-07-17 19:17:23.771178384Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jul 17 19:17:23 pause-795576 crio[2869]: time="2023-07-17 19:17:23.771213859Z" level=info msg="Updated default CNI network name to kindnet"
	Jul 17 19:17:23 pause-795576 crio[2869]: time="2023-07-17 19:17:23.771232766Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Jul 17 19:17:23 pause-795576 crio[2869]: time="2023-07-17 19:17:23.774766709Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jul 17 19:17:23 pause-795576 crio[2869]: time="2023-07-17 19:17:23.774795534Z" level=info msg="Updated default CNI network name to kindnet"
	Jul 17 19:17:23 pause-795576 crio[2869]: time="2023-07-17 19:17:23.774813040Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Jul 17 19:17:23 pause-795576 crio[2869]: time="2023-07-17 19:17:23.778219803Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jul 17 19:17:23 pause-795576 crio[2869]: time="2023-07-17 19:17:23.778248616Z" level=info msg="Updated default CNI network name to kindnet"
	Jul 17 19:17:23 pause-795576 crio[2869]: time="2023-07-17 19:17:23.778260062Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Jul 17 19:17:23 pause-795576 crio[2869]: time="2023-07-17 19:17:23.781582554Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jul 17 19:17:23 pause-795576 crio[2869]: time="2023-07-17 19:17:23.781609794Z" level=info msg="Updated default CNI network name to kindnet"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	c7d9925d4d034       5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c   16 seconds ago       Running             kube-proxy                1                   be9d0f26dd7c3       kube-proxy-vcv28
	50720c457cc27       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   16 seconds ago       Running             coredns                   1                   d649adf698c9d       coredns-5d78c9869d-7bhk2
	a5b342c3188d4       b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da   16 seconds ago       Running             kindnet-cni               1                   52b5cc4aad2ad       kindnet-blwth
	03e6c6fd4ceca       7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f   19 seconds ago       Running             kube-controller-manager   2                   20ae7a52e8589       kube-controller-manager-pause-795576
	cedfc31e11d52       08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a   19 seconds ago       Running             kube-apiserver            2                   a59836c7236b4       kube-apiserver-pause-795576
	f2971fae983c3       41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a   19 seconds ago       Running             kube-scheduler            3                   8c0aa2dd28d39       kube-scheduler-pause-795576
	b13a19d103774       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681   19 seconds ago       Running             etcd                      2                   977426b5ad0d4       etcd-pause-795576
	ab7184693b853       41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a   22 seconds ago       Exited              kube-scheduler            2                   8c0aa2dd28d39       kube-scheduler-pause-795576
	3c4ecc96bf0b9       08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a   34 seconds ago       Exited              kube-apiserver            1                   a59836c7236b4       kube-apiserver-pause-795576
	d061dfba20f3f       7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f   34 seconds ago       Exited              kube-controller-manager   1                   20ae7a52e8589       kube-controller-manager-pause-795576
	f6dec920e96cf       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681   34 seconds ago       Exited              etcd                      1                   977426b5ad0d4       etcd-pause-795576
	e10be0e9af17f       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   43 seconds ago       Exited              coredns                   0                   d649adf698c9d       coredns-5d78c9869d-7bhk2
	6c2d784dbdd18       5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c   About a minute ago   Exited              kube-proxy                0                   be9d0f26dd7c3       kube-proxy-vcv28
	249f2d6748858       b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da   About a minute ago   Exited              kindnet-cni               0                   52b5cc4aad2ad       kindnet-blwth
	
	* 
	* ==> coredns [50720c457cc27be585b0bcec78fc0350e552eacbc5e5d3985113f7cdfffb3ec1] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = bfa258e3dfcd8004ab6c7d60772766a595ee209e49c62e6ae56bd911a145318b327e0c73bbccac30667047dafea6a8c1149027cea85d58a2246677e8ec1caab2
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50524 - 8696 "HINFO IN 4174031737280363131.579355201474769177. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.02890859s
	
	* 
	* ==> coredns [e10be0e9af17f8987e24e478a481f2c0799874fb1457f4c96c792fa327c1e1fe] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = bfa258e3dfcd8004ab6c7d60772766a595ee209e49c62e6ae56bd911a145318b327e0c73bbccac30667047dafea6a8c1149027cea85d58a2246677e8ec1caab2
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:51811 - 41665 "HINFO IN 4362517523526051086.6698439147695089153. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010873973s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-795576
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-795576
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=46c8e7c4243e42e98d29628785c0523fbabbd9b5
	                    minikube.k8s.io/name=pause-795576
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_17T19_16_11_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 19:16:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-795576
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 19:17:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 19:17:22 +0000   Mon, 17 Jul 2023 19:16:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 19:17:22 +0000   Mon, 17 Jul 2023 19:16:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 19:17:22 +0000   Mon, 17 Jul 2023 19:16:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 19:17:22 +0000   Mon, 17 Jul 2023 19:16:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    pause-795576
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	System Info:
	  Machine ID:                 18132160ceaa4c97a29b2e91cfb68c63
	  System UUID:                6a23a9c6-456f-460a-acc3-5ceeb9d277a9
	  Boot ID:                    72066744-0b12-457f-a61f-5086cdf4a210
	  Kernel Version:             5.15.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5d78c9869d-7bhk2                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     76s
	  kube-system                 etcd-pause-795576                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         89s
	  kube-system                 kindnet-blwth                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      76s
	  kube-system                 kube-apiserver-pause-795576             250m (3%)     0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-controller-manager-pause-795576    200m (2%)     0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-proxy-vcv28                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-scheduler-pause-795576             100m (1%)     0 (0%)      0 (0%)           0 (0%)         89s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 75s                kube-proxy       
	  Normal  Starting                 15s                kube-proxy       
	  Normal  NodeHasSufficientMemory  99s (x8 over 99s)  kubelet          Node pause-795576 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    99s (x8 over 99s)  kubelet          Node pause-795576 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     99s (x8 over 99s)  kubelet          Node pause-795576 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     90s                kubelet          Node pause-795576 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  90s                kubelet          Node pause-795576 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    90s                kubelet          Node pause-795576 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 90s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           77s                node-controller  Node pause-795576 event: Registered Node pause-795576 in Controller
	  Normal  NodeReady                45s                kubelet          Node pause-795576 status is now: NodeReady
	  Normal  Starting                 21s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20s (x8 over 21s)  kubelet          Node pause-795576 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20s (x8 over 21s)  kubelet          Node pause-795576 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20s (x8 over 21s)  kubelet          Node pause-795576 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5s                 node-controller  Node pause-795576 event: Registered Node pause-795576 in Controller
	
	* 
	* ==> dmesg <==
	* [  +4.255707] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-743d16d82889
	[  +0.000007] ll header: 00000000: 02 42 b6 c0 17 7b 02 42 c0 a8 3a 02 08 00
	[  +8.191422] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-743d16d82889
	[  +0.000024] ll header: 00000000: 02 42 b6 c0 17 7b 02 42 c0 a8 3a 02 08 00
	[Jul17 19:08] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-743d16d82889
	[  +0.000009] ll header: 00000000: 02 42 b6 c0 17 7b 02 42 c0 a8 3a 02 08 00
	[  +1.009828] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-743d16d82889
	[  +0.000006] ll header: 00000000: 02 42 b6 c0 17 7b 02 42 c0 a8 3a 02 08 00
	[  +2.015844] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-743d16d82889
	[  +0.000006] ll header: 00000000: 02 42 b6 c0 17 7b 02 42 c0 a8 3a 02 08 00
	[  +4.219847] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-743d16d82889
	[  +0.000006] ll header: 00000000: 02 42 b6 c0 17 7b 02 42 c0 a8 3a 02 08 00
	[  +8.195274] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-743d16d82889
	[  +0.000007] ll header: 00000000: 02 42 b6 c0 17 7b 02 42 c0 a8 3a 02 08 00
	[Jul17 19:11] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-f7333153fb0a
	[  +0.000009] ll header: 00000000: 02 42 f3 7a f9 00 02 42 c0 a8 43 02 08 00
	[  +1.022155] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-f7333153fb0a
	[  +0.000006] ll header: 00000000: 02 42 f3 7a f9 00 02 42 c0 a8 43 02 08 00
	[  +2.011847] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-f7333153fb0a
	[  +0.000028] ll header: 00000000: 02 42 f3 7a f9 00 02 42 c0 a8 43 02 08 00
	[  +4.159649] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-f7333153fb0a
	[  +0.000006] ll header: 00000000: 02 42 f3 7a f9 00 02 42 c0 a8 43 02 08 00
	[  +8.195411] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-f7333153fb0a
	[  +0.000006] ll header: 00000000: 02 42 f3 7a f9 00 02 42 c0 a8 43 02 08 00
	[Jul17 19:14] process 'docker/tmp/qemu-check188754489/check' started with executable stack
	
	* 
	* ==> etcd [b13a19d103774971cbc8e8ba48f201f85edbf7be76691036eb06342cc2c22061] <==
	* {"level":"info","ts":"2023-07-17T19:17:19.837Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-17T19:17:19.837Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-17T19:17:19.837Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2023-07-17T19:17:19.837Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2023-07-17T19:17:19.838Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T19:17:19.838Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T19:17:19.839Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-07-17T19:17:19.840Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-07-17T19:17:19.840Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-07-17T19:17:19.840Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-07-17T19:17:19.840Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-07-17T19:17:21.115Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 2"}
	{"level":"info","ts":"2023-07-17T19:17:21.115Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-07-17T19:17:21.115Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2023-07-17T19:17:21.115Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 3"}
	{"level":"info","ts":"2023-07-17T19:17:21.115Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-07-17T19:17:21.115Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 3"}
	{"level":"info","ts":"2023-07-17T19:17:21.115Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-07-17T19:17:21.230Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:pause-795576 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-17T19:17:21.230Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T19:17:21.230Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T19:17:21.231Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-17T19:17:21.231Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-17T19:17:21.232Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2023-07-17T19:17:21.232Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> etcd [f6dec920e96cf327838fec0d74079f5434ac14535a8435b632e013e75fdb381f] <==
	* {"level":"info","ts":"2023-07-17T19:17:05.282Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"948.071µs"}
	{"level":"info","ts":"2023-07-17T19:17:05.284Z","caller":"etcdserver/server.go:530","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2023-07-17T19:17:05.294Z","caller":"etcdserver/raft.go:529","msg":"restarting local member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","commit-index":452}
	{"level":"info","ts":"2023-07-17T19:17:05.294Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=()"}
	{"level":"info","ts":"2023-07-17T19:17:05.294Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became follower at term 2"}
	{"level":"info","ts":"2023-07-17T19:17:05.294Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 8688e899f7831fc7 [peers: [], term: 2, commit: 452, applied: 0, lastindex: 452, lastterm: 2]"}
	{"level":"warn","ts":"2023-07-17T19:17:05.295Z","caller":"auth/store.go:1234","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2023-07-17T19:17:05.363Z","caller":"mvcc/kvstore.go:393","msg":"kvstore restored","current-rev":433}
	{"level":"info","ts":"2023-07-17T19:17:05.365Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2023-07-17T19:17:05.367Z","caller":"etcdserver/corrupt.go:95","msg":"starting initial corruption check","local-member-id":"8688e899f7831fc7","timeout":"7s"}
	{"level":"info","ts":"2023-07-17T19:17:05.367Z","caller":"etcdserver/corrupt.go:165","msg":"initial corruption checking passed; no corruption","local-member-id":"8688e899f7831fc7"}
	{"level":"info","ts":"2023-07-17T19:17:05.367Z","caller":"etcdserver/server.go:854","msg":"starting etcd server","local-member-id":"8688e899f7831fc7","local-server-version":"3.5.7","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2023-07-17T19:17:05.367Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-17T19:17:05.367Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-07-17T19:17:05.367Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-17T19:17:05.367Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-17T19:17:05.368Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2023-07-17T19:17:05.368Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2023-07-17T19:17:05.368Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T19:17:05.368Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T19:17:05.373Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-07-17T19:17:05.374Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-07-17T19:17:05.374Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-07-17T19:17:05.374Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-07-17T19:17:05.374Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	
	* 
	* ==> kernel <==
	*  19:17:39 up  4:00,  0 users,  load average: 4.37, 3.63, 2.33
	Linux pause-795576 5.15.0-1037-gcp #45~20.04.1-Ubuntu SMP Thu Jun 22 08:31:09 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [249f2d6748858a003aca4fb5b25038d813f220e6af0a35223e06266835417555] <==
	* I0717 19:16:23.969753       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0717 19:16:23.975535       1 main.go:107] hostIP = 192.168.67.2
	podIP = 192.168.67.2
	I0717 19:16:23.980786       1 main.go:116] setting mtu 1500 for CNI 
	I0717 19:16:23.980830       1 main.go:146] kindnetd IP family: "ipv4"
	I0717 19:16:23.980848       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0717 19:16:54.308764       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0717 19:16:54.324371       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0717 19:16:54.324500       1 main.go:227] handling current node
	
	* 
	* ==> kindnet [a5b342c3188d4dada0219660ad4c433081d03f89457a114d9d6e0e04ee02126e] <==
	* I0717 19:17:23.371239       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0717 19:17:23.371335       1 main.go:107] hostIP = 192.168.67.2
	podIP = 192.168.67.2
	I0717 19:17:23.371492       1 main.go:116] setting mtu 1500 for CNI 
	I0717 19:17:23.371507       1 main.go:146] kindnetd IP family: "ipv4"
	I0717 19:17:23.371541       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0717 19:17:23.766546       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0717 19:17:23.766572       1 main.go:227] handling current node
	I0717 19:17:33.785147       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0717 19:17:33.785173       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [3c4ecc96bf0b93221b1cd41bd5f1526256df3f33b13568ef0d2794d1eb83eca3] <==
	* I0717 19:17:05.492802       1 server.go:553] external host was not specified, using 192.168.67.2
	I0717 19:17:05.495140       1 server.go:166] Version: v1.27.3
	I0717 19:17:05.495187       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	* 
	* ==> kube-apiserver [cedfc31e11d529607b3ee5ca35c5cce028b584be899fe6d4a49d88a77aad3495] <==
	* I0717 19:17:22.391297       1 aggregator.go:150] waiting for initial CRD sync...
	I0717 19:17:22.391624       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0717 19:17:22.393082       1 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller
	I0717 19:17:22.412061       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0717 19:17:22.499665       1 controller.go:155] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0717 19:17:22.566638       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0717 19:17:22.587330       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 19:17:22.591320       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0717 19:17:22.591343       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0717 19:17:22.591502       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0717 19:17:22.591524       1 aggregator.go:152] initial CRD sync complete...
	I0717 19:17:22.591531       1 autoregister_controller.go:141] Starting autoregister controller
	I0717 19:17:22.591537       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0717 19:17:22.591544       1 cache.go:39] Caches are synced for autoregister controller
	I0717 19:17:22.591699       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0717 19:17:22.593534       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0717 19:17:22.593673       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0717 19:17:22.662183       1 shared_informer.go:318] Caches are synced for configmaps
	I0717 19:17:23.140996       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0717 19:17:23.397058       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0717 19:17:24.593082       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0717 19:17:24.689958       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0717 19:17:24.698330       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0717 19:17:24.810171       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 19:17:24.819068       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-controller-manager [03e6c6fd4ceca064612bdbf851f465381b9ec0dc6e2ab6a3dca077888376c88f] <==
	* I0717 19:17:34.790868       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0717 19:17:34.790900       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0717 19:17:34.796130       1 shared_informer.go:318] Caches are synced for endpoint
	I0717 19:17:34.802002       1 shared_informer.go:318] Caches are synced for GC
	I0717 19:17:34.813319       1 shared_informer.go:318] Caches are synced for HPA
	I0717 19:17:34.817525       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0717 19:17:34.817562       1 shared_informer.go:318] Caches are synced for taint
	I0717 19:17:34.817702       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0717 19:17:34.817688       1 node_lifecycle_controller.go:1223] "Initializing eviction metric for zone" zone=""
	I0717 19:17:34.817758       1 taint_manager.go:211] "Sending events to api server"
	I0717 19:17:34.817811       1 event.go:307] "Event occurred" object="pause-795576" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-795576 event: Registered Node pause-795576 in Controller"
	I0717 19:17:34.817864       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-795576"
	I0717 19:17:34.817932       1 node_lifecycle_controller.go:1069] "Controller detected that zone is now in new state" zone="" newState=Normal
	I0717 19:17:34.820060       1 shared_informer.go:318] Caches are synced for crt configmap
	I0717 19:17:34.822381       1 shared_informer.go:318] Caches are synced for job
	I0717 19:17:34.845101       1 shared_informer.go:318] Caches are synced for daemon sets
	I0717 19:17:34.888241       1 shared_informer.go:318] Caches are synced for stateful set
	I0717 19:17:34.906579       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0717 19:17:34.924305       1 shared_informer.go:318] Caches are synced for deployment
	I0717 19:17:34.949261       1 shared_informer.go:318] Caches are synced for disruption
	I0717 19:17:34.982902       1 shared_informer.go:318] Caches are synced for resource quota
	I0717 19:17:34.997419       1 shared_informer.go:318] Caches are synced for resource quota
	I0717 19:17:35.313169       1 shared_informer.go:318] Caches are synced for garbage collector
	I0717 19:17:35.313204       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0717 19:17:35.329674       1 shared_informer.go:318] Caches are synced for garbage collector
	
	* 
	* ==> kube-controller-manager [d061dfba20f3f98ca0245fe20749ce94924338455dffdddfee06f3279e0c6ff8] <==
	* 
	* 
	* ==> kube-proxy [6c2d784dbdd1891bc8ee8488658197313688eb7036abdb4530230dcf2eccb3fa] <==
	* I0717 19:16:24.216895       1 node.go:141] Successfully retrieved node IP: 192.168.67.2
	I0717 19:16:24.217025       1 server_others.go:110] "Detected node IP" address="192.168.67.2"
	I0717 19:16:24.217063       1 server_others.go:554] "Using iptables proxy"
	I0717 19:16:24.417185       1 server_others.go:192] "Using iptables Proxier"
	I0717 19:16:24.417300       1 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0717 19:16:24.417337       1 server_others.go:200] "Creating dualStackProxier for iptables"
	I0717 19:16:24.417381       1 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0717 19:16:24.417438       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 19:16:24.418155       1 server.go:658] "Version info" version="v1.27.3"
	I0717 19:16:24.418441       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 19:16:24.419037       1 config.go:188] "Starting service config controller"
	I0717 19:16:24.419125       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0717 19:16:24.419074       1 config.go:97] "Starting endpoint slice config controller"
	I0717 19:16:24.420055       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0717 19:16:24.419460       1 config.go:315] "Starting node config controller"
	I0717 19:16:24.420155       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0717 19:16:24.519240       1 shared_informer.go:318] Caches are synced for service config
	I0717 19:16:24.522185       1 shared_informer.go:318] Caches are synced for node config
	I0717 19:16:24.522328       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [c7d9925d4d034c86f811fcaa0fc3e82d9e6c6d2aa3586c572cb69b949b380aae] <==
	* I0717 19:17:23.482651       1 node.go:141] Successfully retrieved node IP: 192.168.67.2
	I0717 19:17:23.482745       1 server_others.go:110] "Detected node IP" address="192.168.67.2"
	I0717 19:17:23.482776       1 server_others.go:554] "Using iptables proxy"
	I0717 19:17:23.503860       1 server_others.go:192] "Using iptables Proxier"
	I0717 19:17:23.503912       1 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0717 19:17:23.503927       1 server_others.go:200] "Creating dualStackProxier for iptables"
	I0717 19:17:23.503943       1 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0717 19:17:23.504056       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 19:17:23.504781       1 server.go:658] "Version info" version="v1.27.3"
	I0717 19:17:23.504801       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 19:17:23.505456       1 config.go:188] "Starting service config controller"
	I0717 19:17:23.505487       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0717 19:17:23.505537       1 config.go:315] "Starting node config controller"
	I0717 19:17:23.505555       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0717 19:17:23.505649       1 config.go:97] "Starting endpoint slice config controller"
	I0717 19:17:23.505892       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0717 19:17:23.606224       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0717 19:17:23.606325       1 shared_informer.go:318] Caches are synced for node config
	I0717 19:17:23.606333       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [ab7184693b8535872a6449bd84279882db6966e0d108be297584389fcbd446cd] <==
	* I0717 19:17:17.222310       1 serving.go:348] Generated self-signed cert in-memory
	W0717 19:17:17.444746       1 authentication.go:368] Error looking up in-cluster authentication configuration: Get "https://192.168.67.2:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.67.2:8443: connect: connection refused
	W0717 19:17:17.444795       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0717 19:17:17.444803       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0717 19:17:17.447621       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.3"
	I0717 19:17:17.447646       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 19:17:17.448801       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0717 19:17:17.448841       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0717 19:17:17.448865       1 shared_informer.go:314] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 19:17:17.448877       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0717 19:17:17.449448       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0717 19:17:17.449470       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 19:17:17.449486       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0717 19:17:17.449645       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [f2971fae983c36039ba84ce578e2ef2b500468b090b10b9309dfd36f30cb0e41] <==
	* I0717 19:17:20.581449       1 serving.go:348] Generated self-signed cert in-memory
	W0717 19:17:22.470452       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0717 19:17:22.470489       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 19:17:22.470504       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0717 19:17:22.470517       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0717 19:17:22.567509       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.3"
	I0717 19:17:22.567620       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 19:17:22.570174       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0717 19:17:22.570281       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 19:17:22.570809       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0717 19:17:22.571433       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 19:17:22.670820       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Jul 17 19:17:20 pause-795576 kubelet[3413]: I0717 19:17:20.387492    3413 kubelet_node_status.go:70] "Attempting to register node" node="pause-795576"
	Jul 17 19:17:22 pause-795576 kubelet[3413]: I0717 19:17:22.588489    3413 kubelet_node_status.go:108] "Node was previously registered" node="pause-795576"
	Jul 17 19:17:22 pause-795576 kubelet[3413]: I0717 19:17:22.588599    3413 kubelet_node_status.go:73] "Successfully registered node" node="pause-795576"
	Jul 17 19:17:22 pause-795576 kubelet[3413]: I0717 19:17:22.590307    3413 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 17 19:17:22 pause-795576 kubelet[3413]: I0717 19:17:22.591362    3413 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 17 19:17:22 pause-795576 kubelet[3413]: I0717 19:17:22.876082    3413 apiserver.go:52] "Watching apiserver"
	Jul 17 19:17:22 pause-795576 kubelet[3413]: I0717 19:17:22.879570    3413 topology_manager.go:212] "Topology Admit Handler"
	Jul 17 19:17:22 pause-795576 kubelet[3413]: I0717 19:17:22.880108    3413 topology_manager.go:212] "Topology Admit Handler"
	Jul 17 19:17:22 pause-795576 kubelet[3413]: I0717 19:17:22.880226    3413 topology_manager.go:212] "Topology Admit Handler"
	Jul 17 19:17:22 pause-795576 kubelet[3413]: I0717 19:17:22.962231    3413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/543aec10-6af6-4088-941a-d684da877b3f-kube-proxy\") pod \"kube-proxy-vcv28\" (UID: \"543aec10-6af6-4088-941a-d684da877b3f\") " pod="kube-system/kube-proxy-vcv28"
	Jul 17 19:17:22 pause-795576 kubelet[3413]: I0717 19:17:22.962302    3413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/113dbc11-1279-4188-b57f-ef1a7476354e-config-volume\") pod \"coredns-5d78c9869d-7bhk2\" (UID: \"113dbc11-1279-4188-b57f-ef1a7476354e\") " pod="kube-system/coredns-5d78c9869d-7bhk2"
	Jul 17 19:17:22 pause-795576 kubelet[3413]: I0717 19:17:22.962334    3413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/543aec10-6af6-4088-941a-d684da877b3f-xtables-lock\") pod \"kube-proxy-vcv28\" (UID: \"543aec10-6af6-4088-941a-d684da877b3f\") " pod="kube-system/kube-proxy-vcv28"
	Jul 17 19:17:22 pause-795576 kubelet[3413]: I0717 19:17:22.962361    3413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/543aec10-6af6-4088-941a-d684da877b3f-lib-modules\") pod \"kube-proxy-vcv28\" (UID: \"543aec10-6af6-4088-941a-d684da877b3f\") " pod="kube-system/kube-proxy-vcv28"
	Jul 17 19:17:22 pause-795576 kubelet[3413]: I0717 19:17:22.962388    3413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh7kg\" (UniqueName: \"kubernetes.io/projected/543aec10-6af6-4088-941a-d684da877b3f-kube-api-access-hh7kg\") pod \"kube-proxy-vcv28\" (UID: \"543aec10-6af6-4088-941a-d684da877b3f\") " pod="kube-system/kube-proxy-vcv28"
	Jul 17 19:17:22 pause-795576 kubelet[3413]: I0717 19:17:22.962416    3413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8bj6\" (UniqueName: \"kubernetes.io/projected/113dbc11-1279-4188-b57f-ef1a7476354e-kube-api-access-k8bj6\") pod \"coredns-5d78c9869d-7bhk2\" (UID: \"113dbc11-1279-4188-b57f-ef1a7476354e\") " pod="kube-system/coredns-5d78c9869d-7bhk2"
	Jul 17 19:17:22 pause-795576 kubelet[3413]: I0717 19:17:22.980306    3413 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world"
	Jul 17 19:17:23 pause-795576 kubelet[3413]: E0717 19:17:23.000618    3413 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-apiserver-pause-795576\" already exists" pod="kube-system/kube-apiserver-pause-795576"
	Jul 17 19:17:23 pause-795576 kubelet[3413]: I0717 19:17:23.063170    3413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7367b120-9ad2-48ef-a098-f9427cd70ce7-xtables-lock\") pod \"kindnet-blwth\" (UID: \"7367b120-9ad2-48ef-a098-f9427cd70ce7\") " pod="kube-system/kindnet-blwth"
	Jul 17 19:17:23 pause-795576 kubelet[3413]: I0717 19:17:23.063238    3413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7367b120-9ad2-48ef-a098-f9427cd70ce7-lib-modules\") pod \"kindnet-blwth\" (UID: \"7367b120-9ad2-48ef-a098-f9427cd70ce7\") " pod="kube-system/kindnet-blwth"
	Jul 17 19:17:23 pause-795576 kubelet[3413]: I0717 19:17:23.063477    3413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/7367b120-9ad2-48ef-a098-f9427cd70ce7-cni-cfg\") pod \"kindnet-blwth\" (UID: \"7367b120-9ad2-48ef-a098-f9427cd70ce7\") " pod="kube-system/kindnet-blwth"
	Jul 17 19:17:23 pause-795576 kubelet[3413]: I0717 19:17:23.063542    3413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl564\" (UniqueName: \"kubernetes.io/projected/7367b120-9ad2-48ef-a098-f9427cd70ce7-kube-api-access-cl564\") pod \"kindnet-blwth\" (UID: \"7367b120-9ad2-48ef-a098-f9427cd70ce7\") " pod="kube-system/kindnet-blwth"
	Jul 17 19:17:23 pause-795576 kubelet[3413]: I0717 19:17:23.063611    3413 reconciler.go:41] "Reconciler: start to sync state"
	Jul 17 19:17:23 pause-795576 kubelet[3413]: I0717 19:17:23.180857    3413 scope.go:115] "RemoveContainer" containerID="249f2d6748858a003aca4fb5b25038d813f220e6af0a35223e06266835417555"
	Jul 17 19:17:23 pause-795576 kubelet[3413]: I0717 19:17:23.184076    3413 scope.go:115] "RemoveContainer" containerID="e10be0e9af17f8987e24e478a481f2c0799874fb1457f4c96c792fa327c1e1fe"
	Jul 17 19:17:23 pause-795576 kubelet[3413]: I0717 19:17:23.184711    3413 scope.go:115] "RemoveContainer" containerID="6c2d784dbdd1891bc8ee8488658197313688eb7036abdb4530230dcf2eccb3fa"
	

                                                
                                                
-- /stdout --
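Note: the kube-scheduler and kubelet excerpts above come from the captured "minikube logs" dump; a minimal way to re-read the same kubelet stream on a live node (assuming the pause-795576 profile were still running, which the harness does not do here) would be something like:

    # hypothetical follow-up, not run by the test: read the last kubelet journal entries inside the minikube node
    out/minikube-linux-amd64 -p pause-795576 ssh -- sudo journalctl -u kubelet --no-pager -n 25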
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-795576 -n pause-795576
helpers_test.go:261: (dbg) Run:  kubectl --context pause-795576 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
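Note: the post-mortem passes above and below collect the same artifacts in a fixed order; a rough by-hand reproduction of that collection for this profile, adapted from the commands the harness itself logs, would be:

    # gather the same failure evidence the harness records for the pause-795576 profile
    docker inspect pause-795576                                                          # full container metadata (dumped below)
    out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-795576 -n pause-795576
    kubectl --context pause-795576 get po -A --field-selector=status.phase!=Running      # pods stuck outside Running
    out/minikube-linux-amd64 -p pause-795576 logs -n 25                                  # last 25 lines per component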
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-795576
helpers_test.go:235: (dbg) docker inspect pause-795576:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b41214069f99263d678e55389354861a53d0485040b7e5f3a65f045ba3bed2d3",
	        "Created": "2023-07-17T19:15:49.103413251Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 299832,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-17T19:15:49.436435024Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6cc01e6091959400f260dc442708e7c71630b58dab1f7c344cb00926bd84950",
	        "ResolvConfPath": "/var/lib/docker/containers/b41214069f99263d678e55389354861a53d0485040b7e5f3a65f045ba3bed2d3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b41214069f99263d678e55389354861a53d0485040b7e5f3a65f045ba3bed2d3/hostname",
	        "HostsPath": "/var/lib/docker/containers/b41214069f99263d678e55389354861a53d0485040b7e5f3a65f045ba3bed2d3/hosts",
	        "LogPath": "/var/lib/docker/containers/b41214069f99263d678e55389354861a53d0485040b7e5f3a65f045ba3bed2d3/b41214069f99263d678e55389354861a53d0485040b7e5f3a65f045ba3bed2d3-json.log",
	        "Name": "/pause-795576",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-795576:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-795576",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/107bf35e8ddc6283bec58cb350be16e6cc2c143f61e57c6923c1a6d71f2cc2cd-init/diff:/var/lib/docker/overlay2/d8b40fcaabfbbb6eb20cfe7c35f752b4babaa96b29803507d5f63d9939e9e0f0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/107bf35e8ddc6283bec58cb350be16e6cc2c143f61e57c6923c1a6d71f2cc2cd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/107bf35e8ddc6283bec58cb350be16e6cc2c143f61e57c6923c1a6d71f2cc2cd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/107bf35e8ddc6283bec58cb350be16e6cc2c143f61e57c6923c1a6d71f2cc2cd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-795576",
	                "Source": "/var/lib/docker/volumes/pause-795576/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-795576",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-795576",
	                "name.minikube.sigs.k8s.io": "pause-795576",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "821d2b3df00169f19368e5d62df45fb50dabe902374bbdce37f18f99cfa644c3",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32948"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32947"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32944"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32946"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32945"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/821d2b3df001",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-795576": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b41214069f99",
	                        "pause-795576"
	                    ],
	                    "NetworkID": "61bb7c620e400c28091a747ba0fe9ed8a58ea2f099bdaf767519ccfad62d2f34",
	                    "EndpointID": "3ff405adca2912a7b828b70034ffd15b4dd2933d37631540b521d0907702e55a",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
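Note: the harness does not parse this JSON dump directly; the cli_runner lines later in the log show it pulling single fields with docker's Go templates instead. A minimal sketch of the same queries (template strings and expected values taken from the output above and the logs below):

    # extract the forwarded SSH port and the profile network address without dumping the whole document
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' pause-795576   # 32948 in this run
    docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' pause-795576         # 192.168.67.2 in this run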
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-795576 -n pause-795576
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-795576 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-795576 logs -n 25: (1.505406181s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p NoKubernetes-404036             | NoKubernetes-404036       | jenkins | v1.30.1 | 17 Jul 23 19:14 UTC | 17 Jul 23 19:15 UTC |
	|         | --no-kubernetes                    |                           |         |         |                     |                     |
	|         | --driver=docker                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-404036             | NoKubernetes-404036       | jenkins | v1.30.1 | 17 Jul 23 19:15 UTC | 17 Jul 23 19:15 UTC |
	| start   | -p NoKubernetes-404036             | NoKubernetes-404036       | jenkins | v1.30.1 | 17 Jul 23 19:15 UTC | 17 Jul 23 19:15 UTC |
	|         | --no-kubernetes                    |                           |         |         |                     |                     |
	|         | --driver=docker                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-404036 sudo        | NoKubernetes-404036       | jenkins | v1.30.1 | 17 Jul 23 19:15 UTC |                     |
	|         | systemctl is-active --quiet        |                           |         |         |                     |                     |
	|         | service kubelet                    |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-404036             | NoKubernetes-404036       | jenkins | v1.30.1 | 17 Jul 23 19:15 UTC | 17 Jul 23 19:15 UTC |
	| start   | -p NoKubernetes-404036             | NoKubernetes-404036       | jenkins | v1.30.1 | 17 Jul 23 19:15 UTC | 17 Jul 23 19:15 UTC |
	|         | --driver=docker                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-404036 sudo        | NoKubernetes-404036       | jenkins | v1.30.1 | 17 Jul 23 19:15 UTC |                     |
	|         | systemctl is-active --quiet        |                           |         |         |                     |                     |
	|         | service kubelet                    |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-404036             | NoKubernetes-404036       | jenkins | v1.30.1 | 17 Jul 23 19:15 UTC | 17 Jul 23 19:15 UTC |
	| start   | -p force-systemd-env-020920        | force-systemd-env-020920  | jenkins | v1.30.1 | 17 Jul 23 19:15 UTC | 17 Jul 23 19:15 UTC |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker               |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p offline-crio-369384             | offline-crio-369384       | jenkins | v1.30.1 | 17 Jul 23 19:15 UTC | 17 Jul 23 19:15 UTC |
	| start   | -p pause-795576 --memory=2048      | pause-795576              | jenkins | v1.30.1 | 17 Jul 23 19:15 UTC | 17 Jul 23 19:16 UTC |
	|         | --install-addons=false             |                           |         |         |                     |                     |
	|         | --wait=all --driver=docker         |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p running-upgrade-383497          | running-upgrade-383497    | jenkins | v1.30.1 | 17 Jul 23 19:15 UTC |                     |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker               |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-383497          | running-upgrade-383497    | jenkins | v1.30.1 | 17 Jul 23 19:15 UTC | 17 Jul 23 19:15 UTC |
	| delete  | -p force-systemd-env-020920        | force-systemd-env-020920  | jenkins | v1.30.1 | 17 Jul 23 19:15 UTC | 17 Jul 23 19:15 UTC |
	| start   | -p stopped-upgrade-435958          | stopped-upgrade-435958    | jenkins | v1.30.1 | 17 Jul 23 19:15 UTC |                     |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker               |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p force-systemd-flag-811473       | force-systemd-flag-811473 | jenkins | v1.30.1 | 17 Jul 23 19:15 UTC | 17 Jul 23 19:16 UTC |
	|         | --memory=2048 --force-systemd      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker               |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-435958          | stopped-upgrade-435958    | jenkins | v1.30.1 | 17 Jul 23 19:16 UTC | 17 Jul 23 19:16 UTC |
	| start   | -p kubernetes-upgrade-677764       | kubernetes-upgrade-677764 | jenkins | v1.30.1 | 17 Jul 23 19:16 UTC | 17 Jul 23 19:17 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0       |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker               |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-811473 ssh cat  | force-systemd-flag-811473 | jenkins | v1.30.1 | 17 Jul 23 19:16 UTC | 17 Jul 23 19:16 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-811473       | force-systemd-flag-811473 | jenkins | v1.30.1 | 17 Jul 23 19:16 UTC | 17 Jul 23 19:16 UTC |
	| start   | -p cert-expiration-383715          | cert-expiration-383715    | jenkins | v1.30.1 | 17 Jul 23 19:16 UTC | 17 Jul 23 19:16 UTC |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --cert-expiration=3m               |                           |         |         |                     |                     |
	|         | --driver=docker                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p pause-795576                    | pause-795576              | jenkins | v1.30.1 | 17 Jul 23 19:16 UTC | 17 Jul 23 19:17 UTC |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker               |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-677764       | kubernetes-upgrade-677764 | jenkins | v1.30.1 | 17 Jul 23 19:17 UTC | 17 Jul 23 19:17 UTC |
	| start   | -p kubernetes-upgrade-677764       | kubernetes-upgrade-677764 | jenkins | v1.30.1 | 17 Jul 23 19:17 UTC |                     |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3       |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker               |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p missing-upgrade-629154          | missing-upgrade-629154    | jenkins | v1.30.1 | 17 Jul 23 19:17 UTC |                     |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker               |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 19:17:04
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 19:17:04.135087  320967 out.go:296] Setting OutFile to fd 1 ...
	I0717 19:17:04.135207  320967 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 19:17:04.135219  320967 out.go:309] Setting ErrFile to fd 2...
	I0717 19:17:04.135225  320967 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 19:17:04.135448  320967 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-138069/.minikube/bin
	I0717 19:17:04.136085  320967 out.go:303] Setting JSON to false
	I0717 19:17:04.137930  320967 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":14375,"bootTime":1689607049,"procs":746,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 19:17:04.138007  320967 start.go:138] virtualization: kvm guest
	I0717 19:17:04.142980  320967 out.go:177] * [missing-upgrade-629154] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 19:17:04.145501  320967 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 19:17:04.145502  320967 notify.go:220] Checking for updates...
	I0717 19:17:04.147300  320967 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 19:17:04.149151  320967 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16890-138069/kubeconfig
	I0717 19:17:04.150918  320967 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-138069/.minikube
	I0717 19:17:04.152658  320967 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 19:17:04.154300  320967 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 19:17:04.156259  320967 config.go:182] Loaded profile config "missing-upgrade-629154": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0717 19:17:04.156287  320967 start_flags.go:695] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0717 19:17:04.158301  320967 out.go:177] * Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	I0717 19:17:04.159849  320967 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 19:17:04.185413  320967 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 19:17:04.185545  320967 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 19:17:04.247451  320967 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:66 SystemTime:2023-07-17 19:17:04.238013565 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 19:17:04.247563  320967 docker.go:294] overlay module found
	I0717 19:17:04.250184  320967 out.go:177] * Using the docker driver based on existing profile
	I0717 19:17:04.252070  320967 start.go:298] selected driver: docker
	I0717 19:17:04.252089  320967 start.go:880] validating driver "docker" against &{Name:missing-upgrade-629154 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:missing-upgrade-629154 Namespace: APIServerName:minikubeCA APIServerNames:[] API
ServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: So
cketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:17:04.252203  320967 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 19:17:04.253010  320967 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 19:17:04.317268  320967 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:66 SystemTime:2023-07-17 19:17:04.308322866 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 19:17:04.317672  320967 cni.go:84] Creating CNI manager for ""
	I0717 19:17:04.317704  320967 cni.go:130] EnableDefaultCNI is true, recommending bridge
	I0717 19:17:04.317716  320967 start_flags.go:319] config:
	{Name:missing-upgrade-629154 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:missing-upgrade-629154 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlu
gin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.3 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:17:04.320368  320967 out.go:177] * Starting control plane node missing-upgrade-629154 in cluster missing-upgrade-629154
	I0717 19:17:04.322085  320967 cache.go:122] Beginning downloading kic base image for docker with crio
	I0717 19:17:04.323813  320967 out.go:177] * Pulling base image ...
	I0717 19:17:04.325531  320967 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I0717 19:17:04.325622  320967 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 19:17:04.342675  320967 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0717 19:17:04.342714  320967 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	W0717 19:17:04.353789  320967 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0717 19:17:04.354060  320967 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/missing-upgrade-629154/config.json ...
	I0717 19:17:04.354086  320967 cache.go:107] acquiring lock: {Name:mkf1a1130734b2d756a0657ef9722999f48d6c2b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:17:04.354134  320967 cache.go:107] acquiring lock: {Name:mkd212c5db1f99d1e2779ee03e5908ac3123cf12 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:17:04.354149  320967 cache.go:107] acquiring lock: {Name:mkdd7c36248d43a8ed2da602bcfcaf77d0ba431f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:17:04.354204  320967 cache.go:107] acquiring lock: {Name:mkba162517b3c0d46459927d0c5ebda7dc236b77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:17:04.354229  320967 cache.go:115] /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0717 19:17:04.354255  320967 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 171.345µs
	I0717 19:17:04.354273  320967 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0717 19:17:04.354111  320967 cache.go:107] acquiring lock: {Name:mk99778cf263ded15bef16af944ba7e5e1c2f1a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:17:04.354262  320967 cache.go:107] acquiring lock: {Name:mkd892d265197bba9d74c85569bdbefabd7a9143 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:17:04.354309  320967 cache.go:115] /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I0717 19:17:04.354200  320967 cache.go:107] acquiring lock: {Name:mkd71aeba8a963da4395dc7d2ffea751af49e924 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:17:04.354263  320967 cache.go:115] /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0717 19:17:04.354373  320967 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 232.908µs
	I0717 19:17:04.354385  320967 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0717 19:17:04.354319  320967 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 221.217µs
	I0717 19:17:04.354389  320967 cache.go:115] /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I0717 19:17:04.354396  320967 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I0717 19:17:04.354396  320967 cache.go:115] /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I0717 19:17:04.354381  320967 cache.go:115] /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I0717 19:17:04.354405  320967 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 226.076µs
	I0717 19:17:04.354425  320967 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I0717 19:17:04.354420  320967 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 288.46µs
	I0717 19:17:04.354440  320967 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I0717 19:17:04.354422  320967 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 206.181µs
	I0717 19:17:04.354451  320967 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I0717 19:17:04.354388  320967 cache.go:195] Successfully downloaded all kic artifacts
	I0717 19:17:04.354484  320967 start.go:365] acquiring machines lock for missing-upgrade-629154: {Name:mk53dbc5c92f6c951a7a8d7b78be05ad027a74a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:17:04.354264  320967 cache.go:115] /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I0717 19:17:04.354532  320967 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 329.473µs
	I0717 19:17:04.354546  320967 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I0717 19:17:04.354302  320967 cache.go:107] acquiring lock: {Name:mk0626aa4c32952c38431bc57a3be6531c251df4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 19:17:04.354601  320967 cache.go:115] /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I0717 19:17:04.354614  320967 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 355.291µs
	I0717 19:17:04.354628  320967 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I0717 19:17:04.354624  320967 start.go:369] acquired machines lock for "missing-upgrade-629154" in 119.135µs
	I0717 19:17:04.354639  320967 cache.go:87] Successfully saved all images to host disk.
	I0717 19:17:04.354652  320967 start.go:96] Skipping create...Using existing machine configuration
	I0717 19:17:04.354669  320967 fix.go:54] fixHost starting: m01
	I0717 19:17:04.354889  320967 cli_runner.go:164] Run: docker container inspect missing-upgrade-629154 --format={{.State.Status}}
	W0717 19:17:04.371087  320967 cli_runner.go:211] docker container inspect missing-upgrade-629154 --format={{.State.Status}} returned with exit code 1
	I0717 19:17:04.371160  320967 fix.go:102] recreateIfNeeded on missing-upgrade-629154: state= err=unknown state "missing-upgrade-629154": docker container inspect missing-upgrade-629154 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-629154
	I0717 19:17:04.371188  320967 fix.go:107] machineExists: false. err=machine does not exist
	I0717 19:17:04.374469  320967 out.go:177] * docker "missing-upgrade-629154" container is missing, will recreate.
	I0717 19:17:03.751876  318888 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:17:03.751901  318888 machine.go:91] provisioned docker machine in 6.025236942s
	I0717 19:17:03.751912  318888 start.go:300] post-start starting for "pause-795576" (driver="docker")
	I0717 19:17:03.751926  318888 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:17:03.752022  318888 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:17:03.752071  318888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-795576
	I0717 19:17:03.768715  318888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32948 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/pause-795576/id_rsa Username:docker}
	I0717 19:17:03.861844  318888 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:17:03.865227  318888 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 19:17:03.865254  318888 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 19:17:03.865262  318888 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 19:17:03.865268  318888 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0717 19:17:03.865279  318888 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-138069/.minikube/addons for local assets ...
	I0717 19:17:03.865329  318888 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-138069/.minikube/files for local assets ...
	I0717 19:17:03.865393  318888 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/1448222.pem -> 1448222.pem in /etc/ssl/certs
	I0717 19:17:03.865471  318888 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:17:03.873647  318888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/1448222.pem --> /etc/ssl/certs/1448222.pem (1708 bytes)
	I0717 19:17:03.899224  318888 start.go:303] post-start completed in 147.294688ms
	I0717 19:17:03.899306  318888 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 19:17:03.899354  318888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-795576
	I0717 19:17:03.918665  318888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32948 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/pause-795576/id_rsa Username:docker}
	I0717 19:17:04.008910  318888 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 19:17:04.013412  318888 fix.go:56] fixHost completed within 6.308108921s
	I0717 19:17:04.013439  318888 start.go:83] releasing machines lock for "pause-795576", held for 6.308164281s
	I0717 19:17:04.013588  318888 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-795576
	I0717 19:17:04.031642  318888 ssh_runner.go:195] Run: cat /version.json
	I0717 19:17:04.031705  318888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-795576
	I0717 19:17:04.031650  318888 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:17:04.031842  318888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-795576
	I0717 19:17:04.051022  318888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32948 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/pause-795576/id_rsa Username:docker}
	I0717 19:17:04.051420  318888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32948 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/pause-795576/id_rsa Username:docker}
	W0717 19:17:04.265682  318888 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
	I0717 19:17:04.265776  318888 ssh_runner.go:195] Run: systemctl --version
	I0717 19:17:04.270368  318888 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:17:04.420251  318888 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 19:17:04.425006  318888 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:17:04.433398  318888 cni.go:227] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0717 19:17:04.433471  318888 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:17:04.442675  318888 cni.go:265] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0717 19:17:04.442700  318888 start.go:469] detecting cgroup driver to use...
	I0717 19:17:04.442745  318888 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 19:17:04.442790  318888 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:17:04.456211  318888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:17:04.466899  318888 docker.go:196] disabling cri-docker service (if available) ...
	I0717 19:17:04.466954  318888 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:17:04.480212  318888 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:17:04.491452  318888 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:17:04.595500  318888 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:17:04.708034  318888 docker.go:212] disabling docker service ...
	I0717 19:17:04.708096  318888 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:17:04.719635  318888 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:17:04.729876  318888 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:17:04.906491  318888 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:17:05.364372  318888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:17:05.379441  318888 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:17:05.399739  318888 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 19:17:05.399818  318888 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:17:05.473884  318888 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:17:05.473956  318888 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:17:05.485450  318888 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:17:05.496941  318888 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
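The sed edits above rewrite CRI-O's drop-in config in place: the pause image, the cgroup manager, and the conmon cgroup. A minimal Go sketch of how such invocations can be assembled (the helper below is illustrative, not minikube's implementation; the file path and keys are taken from the log):

package main

import (
	"fmt"
	"os/exec"
)

// buildSedCmd assembles the same kind of in-place sed edit shown in the log
// for /etc/crio/crio.conf.d/02-crio.conf. The helper name is ours; minikube's
// own implementation is not reproduced here.
func buildSedCmd(key, value string) *exec.Cmd {
	script := fmt.Sprintf(
		`sudo sed -i 's|^.*%s = .*$|%s = "%s"|' /etc/crio/crio.conf.d/02-crio.conf`,
		key, key, value)
	return exec.Command("sh", "-c", script)
}

func main() {
	// The two value-setting edits the log performs: pause image and cgroup manager.
	cmds := []*exec.Cmd{
		buildSedCmd("pause_image", "registry.k8s.io/pause:3.9"),
		buildSedCmd("cgroup_manager", "cgroupfs"),
	}
	for _, c := range cmds {
		fmt.Println(c.String()) // print instead of running, to keep the sketch side-effect free
	}
}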
	I0717 19:17:05.561686  318888 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:17:05.571827  318888 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:17:05.581021  318888 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:17:05.589445  318888 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:17:05.891923  318888 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:17:06.202089  318888 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:17:06.202151  318888 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:17:06.205824  318888 start.go:537] Will wait 60s for crictl version
	I0717 19:17:06.205882  318888 ssh_runner.go:195] Run: which crictl
	I0717 19:17:06.209229  318888 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:17:06.244306  318888 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0717 19:17:06.244386  318888 ssh_runner.go:195] Run: crio --version
	I0717 19:17:06.281634  318888 ssh_runner.go:195] Run: crio --version
	I0717 19:17:06.319722  318888 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.6 ...
	I0717 19:17:02.117237  319694 cli_runner.go:164] Run: docker start kubernetes-upgrade-677764
	I0717 19:17:02.435109  319694 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-677764 --format={{.State.Status}}
	I0717 19:17:02.452218  319694 kic.go:426] container "kubernetes-upgrade-677764" state is running.
	I0717 19:17:02.452677  319694 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-677764
	I0717 19:17:02.471422  319694 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/kubernetes-upgrade-677764/config.json ...
	I0717 19:17:02.471859  319694 machine.go:88] provisioning docker machine ...
	I0717 19:17:02.471882  319694 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-677764"
	I0717 19:17:02.471930  319694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-677764
	I0717 19:17:02.491565  319694 main.go:141] libmachine: Using SSH client type: native
	I0717 19:17:02.492338  319694 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32974 <nil> <nil>}
	I0717 19:17:02.492369  319694 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-677764 && echo "kubernetes-upgrade-677764" | sudo tee /etc/hostname
	I0717 19:17:02.492937  319694 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45724->127.0.0.1:32974: read: connection reset by peer
	I0717 19:17:05.635399  319694 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-677764
	
	I0717 19:17:05.635473  319694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-677764
	I0717 19:17:05.653320  319694 main.go:141] libmachine: Using SSH client type: native
	I0717 19:17:05.653845  319694 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32974 <nil> <nil>}
	I0717 19:17:05.653875  319694 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-677764' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-677764/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-677764' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:17:05.784528  319694 main.go:141] libmachine: SSH cmd err, output: <nil>: 
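The SSH command above keeps /etc/hosts consistent with the new hostname: rewrite an existing 127.0.1.1 entry if one is present, otherwise append one. A small Go sketch that generates the same script for an arbitrary hostname (the function name is illustrative; the script body mirrors the log):

package main

import "fmt"

// hostsUpdateScript reproduces the shell fragment from the log: rewrite an
// existing 127.0.1.1 entry in /etc/hosts if present, otherwise append one.
func hostsUpdateScript(hostname string) string {
	return fmt.Sprintf(`
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
  fi
fi`, hostname)
}

func main() {
	fmt.Println(hostsUpdateScript("kubernetes-upgrade-677764"))
}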
	I0717 19:17:05.784609  319694 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16890-138069/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-138069/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-138069/.minikube}
	I0717 19:17:05.784645  319694 ubuntu.go:177] setting up certificates
	I0717 19:17:05.784658  319694 provision.go:83] configureAuth start
	I0717 19:17:05.784725  319694 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-677764
	I0717 19:17:05.807192  319694 provision.go:138] copyHostCerts
	I0717 19:17:05.807273  319694 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-138069/.minikube/ca.pem, removing ...
	I0717 19:17:05.807285  319694 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-138069/.minikube/ca.pem
	I0717 19:17:05.807368  319694 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-138069/.minikube/ca.pem (1078 bytes)
	I0717 19:17:05.807478  319694 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-138069/.minikube/cert.pem, removing ...
	I0717 19:17:05.807488  319694 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-138069/.minikube/cert.pem
	I0717 19:17:05.807531  319694 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-138069/.minikube/cert.pem (1123 bytes)
	I0717 19:17:05.807601  319694 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-138069/.minikube/key.pem, removing ...
	I0717 19:17:05.807612  319694 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-138069/.minikube/key.pem
	I0717 19:17:05.807662  319694 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-138069/.minikube/key.pem (1675 bytes)
	I0717 19:17:05.807793  319694 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-138069/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-677764 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-677764]
	I0717 19:17:05.964068  319694 provision.go:172] copyRemoteCerts
	I0717 19:17:05.964155  319694 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:17:05.964206  319694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-677764
	I0717 19:17:05.984658  319694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32974 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/kubernetes-upgrade-677764/id_rsa Username:docker}
	I0717 19:17:06.082090  319694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 19:17:06.106944  319694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0717 19:17:06.133035  319694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 19:17:06.157656  319694 provision.go:86] duration metric: configureAuth took 372.978299ms
	I0717 19:17:06.157692  319694 ubuntu.go:193] setting minikube options for container-runtime
	I0717 19:17:06.157922  319694 config.go:182] Loaded profile config "kubernetes-upgrade-677764": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:17:06.158053  319694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-677764
	I0717 19:17:06.178158  319694 main.go:141] libmachine: Using SSH client type: native
	I0717 19:17:06.178846  319694 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32974 <nil> <nil>}
	I0717 19:17:06.178880  319694 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:17:06.469943  319694 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:17:06.469974  319694 machine.go:91] provisioned docker machine in 3.998099493s
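Provisioning finishes a few lines above by dropping the runtime flags into /etc/sysconfig/crio.minikube and restarting CRI-O. A Go sketch assembling the same command string (illustrative helper; the --insecure-registry value is the service CIDR from this run):

package main

import "fmt"

// crioSysconfigCmd assembles the command string the log shows for writing
// /etc/sysconfig/crio.minikube and restarting CRI-O. The function name is ours.
func crioSysconfigCmd(insecureRegistry string) string {
	opts := fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '", insecureRegistry)
	return "sudo mkdir -p /etc/sysconfig && printf %s \"\n" +
		opts + "\n\" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio"
}

func main() {
	fmt.Println(crioSysconfigCmd("10.96.0.0/12"))
}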
	I0717 19:17:06.469987  319694 start.go:300] post-start starting for "kubernetes-upgrade-677764" (driver="docker")
	I0717 19:17:06.469999  319694 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:17:06.470080  319694 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:17:06.470131  319694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-677764
	I0717 19:17:06.497657  319694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32974 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/kubernetes-upgrade-677764/id_rsa Username:docker}
	I0717 19:17:06.593084  319694 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:17:06.596142  319694 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 19:17:06.596174  319694 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 19:17:06.596182  319694 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 19:17:06.596189  319694 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0717 19:17:06.596205  319694 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-138069/.minikube/addons for local assets ...
	I0717 19:17:06.596265  319694 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-138069/.minikube/files for local assets ...
	I0717 19:17:06.596352  319694 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/1448222.pem -> 1448222.pem in /etc/ssl/certs
	I0717 19:17:06.596461  319694 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:17:06.605044  319694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/1448222.pem --> /etc/ssl/certs/1448222.pem (1708 bytes)
	I0717 19:17:06.628368  319694 start.go:303] post-start completed in 158.364655ms
	I0717 19:17:06.628472  319694 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 19:17:06.628520  319694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-677764
	I0717 19:17:06.646743  319694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32974 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/kubernetes-upgrade-677764/id_rsa Username:docker}
	I0717 19:17:06.737085  319694 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 19:17:06.741399  319694 fix.go:56] fixHost completed within 4.655720225s
	I0717 19:17:06.741423  319694 start.go:83] releasing machines lock for "kubernetes-upgrade-677764", held for 4.655764905s
	I0717 19:17:06.741515  319694 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-677764
	I0717 19:17:06.760401  319694 ssh_runner.go:195] Run: cat /version.json
	I0717 19:17:06.760443  319694 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:17:06.760454  319694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-677764
	I0717 19:17:06.760520  319694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-677764
	I0717 19:17:06.780209  319694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32974 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/kubernetes-upgrade-677764/id_rsa Username:docker}
	I0717 19:17:06.781542  319694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32974 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/kubernetes-upgrade-677764/id_rsa Username:docker}
	I0717 19:17:06.321748  318888 cli_runner.go:164] Run: docker network inspect pause-795576 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 19:17:06.340215  318888 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0717 19:17:06.344375  318888 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 19:17:06.344426  318888 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:17:06.385379  318888 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 19:17:06.385403  318888 crio.go:415] Images already preloaded, skipping extraction
	I0717 19:17:06.385455  318888 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:17:06.421516  318888 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 19:17:06.421539  318888 cache_images.go:84] Images are preloaded, skipping loading
	I0717 19:17:06.421596  318888 ssh_runner.go:195] Run: crio config
	I0717 19:17:06.465827  318888 cni.go:84] Creating CNI manager for ""
	I0717 19:17:06.465852  318888 cni.go:149] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 19:17:06.465871  318888 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 19:17:06.465889  318888 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-795576 NodeName:pause-795576 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 19:17:06.466031  318888 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-795576"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
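The kubeadm config above is generated by minikube and later copied to the node as /var/tmp/minikube/kubeadm.yaml.new. As a hedged illustration of the mechanism only (not minikube's actual template), the KubeletConfiguration section could be rendered like this, with the cgroup driver detected earlier filled in:

package main

import (
	"os"
	"text/template"
)

// A stripped-down template for the KubeletConfiguration section shown above.
// The template text and params struct are illustrative, not minikube's own.
const kubeletCfg = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: {{.CgroupDriver}}
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "{{.DNSDomain}}"
staticPodPath: /etc/kubernetes/manifests
`

type params struct {
	CgroupDriver string // "cgroupfs", the driver detected earlier in the log
	DNSDomain    string // "cluster.local" for this profile
}

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletCfg))
	_ = t.Execute(os.Stdout, params{CgroupDriver: "cgroupfs", DNSDomain: "cluster.local"})
}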
	
	I0717 19:17:06.466104  318888 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=pause-795576 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:pause-795576 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 19:17:06.466152  318888 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 19:17:06.476366  318888 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:17:06.476449  318888 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:17:06.488299  318888 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (422 bytes)
	I0717 19:17:06.508305  318888 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 19:17:06.527773  318888 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2093 bytes)
	I0717 19:17:06.548000  318888 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0717 19:17:06.551677  318888 certs.go:56] Setting up /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/pause-795576 for IP: 192.168.67.2
	I0717 19:17:06.551715  318888 certs.go:190] acquiring lock for shared ca certs: {Name:mk42196ce59710ebf500640671660e2f4656c84e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:17:06.551876  318888 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16890-138069/.minikube/ca.key
	I0717 19:17:06.551932  318888 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16890-138069/.minikube/proxy-client-ca.key
	I0717 19:17:06.552042  318888 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/pause-795576/client.key
	I0717 19:17:06.552136  318888 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/pause-795576/apiserver.key.c7fa3a9e
	I0717 19:17:06.552197  318888 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/pause-795576/proxy-client.key
	I0717 19:17:06.552352  318888 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/home/jenkins/minikube-integration/16890-138069/.minikube/certs/144822.pem (1338 bytes)
	W0717 19:17:06.552396  318888 certs.go:433] ignoring /home/jenkins/minikube-integration/16890-138069/.minikube/certs/home/jenkins/minikube-integration/16890-138069/.minikube/certs/144822_empty.pem, impossibly tiny 0 bytes
	I0717 19:17:06.552412  318888 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:17:06.552450  318888 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem (1078 bytes)
	I0717 19:17:06.552495  318888 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/home/jenkins/minikube-integration/16890-138069/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:17:06.552528  318888 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/home/jenkins/minikube-integration/16890-138069/.minikube/certs/key.pem (1675 bytes)
	I0717 19:17:06.552574  318888 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/1448222.pem (1708 bytes)
	I0717 19:17:06.553429  318888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/pause-795576/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 19:17:06.579049  318888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/pause-795576/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 19:17:06.602577  318888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/pause-795576/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:17:06.626285  318888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/pause-795576/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 19:17:06.650369  318888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:17:06.673739  318888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 19:17:06.698501  318888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:17:06.721122  318888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 19:17:06.744207  318888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:17:06.770536  318888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/certs/144822.pem --> /usr/share/ca-certificates/144822.pem (1338 bytes)
	I0717 19:17:06.795889  318888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/1448222.pem --> /usr/share/ca-certificates/1448222.pem (1708 bytes)
	I0717 19:17:06.818698  318888 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:17:06.836903  318888 ssh_runner.go:195] Run: openssl version
	I0717 19:17:06.842176  318888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:17:06.850633  318888 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:17:06.853922  318888 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:46 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:17:06.853979  318888 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:17:06.860335  318888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:17:06.869169  318888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144822.pem && ln -fs /usr/share/ca-certificates/144822.pem /etc/ssl/certs/144822.pem"
	I0717 19:17:06.880988  318888 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144822.pem
	I0717 19:17:06.885329  318888 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:51 /usr/share/ca-certificates/144822.pem
	I0717 19:17:06.885399  318888 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144822.pem
	I0717 19:17:06.892308  318888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/144822.pem /etc/ssl/certs/51391683.0"
	I0717 19:17:06.901646  318888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1448222.pem && ln -fs /usr/share/ca-certificates/1448222.pem /etc/ssl/certs/1448222.pem"
	I0717 19:17:06.912255  318888 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1448222.pem
	I0717 19:17:06.915775  318888 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:51 /usr/share/ca-certificates/1448222.pem
	I0717 19:17:06.915830  318888 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1448222.pem
	I0717 19:17:06.922733  318888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1448222.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 19:17:06.931816  318888 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 19:17:06.935362  318888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 19:17:06.941436  318888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 19:17:06.947993  318888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 19:17:06.954695  318888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 19:17:06.962346  318888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 19:17:06.969081  318888 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
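The openssl runs above do two things: -hash -noout prints the subject hash used to name the symlinks under /etc/ssl/certs (b5213941.0, 51391683.0, 3ec20f2e.0 in this run), and -checkend 86400 verifies that each cluster certificate will not expire within the next 24 hours. A Go sketch of both checks (function names and the sample path are illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHashLink shells out to openssl for the subject hash of a CA
// certificate, which gives the "<hash>.0" symlink name under /etc/ssl/certs.
func subjectHashLink(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)) + ".0", nil
}

// expiresWithinADay mirrors the "-checkend 86400" probes above: openssl exits
// non-zero when the certificate expires within 86400 seconds (24h), so any
// error is treated as a failed check here.
func expiresWithinADay(certPath string) bool {
	err := exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400").Run()
	return err != nil
}

func main() {
	// Sample path from this run; substitute any PEM certificate on the host.
	const cert = "/usr/share/ca-certificates/minikubeCA.pem"
	if link, err := subjectHashLink(cert); err == nil {
		fmt.Println("symlink name:", link)
	}
	fmt.Println("expires within 24h:", expiresWithinADay(cert))
}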
	I0717 19:17:06.977340  318888 kubeadm.go:404] StartCluster: {Name:pause-795576 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:pause-795576 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clust
er.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage
-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:17:06.977502  318888 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:17:06.977552  318888 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:17:07.020731  318888 cri.go:89] found id: "3c4ecc96bf0b93221b1cd41bd5f1526256df3f33b13568ef0d2794d1eb83eca3"
	I0717 19:17:07.020755  318888 cri.go:89] found id: "d061dfba20f3f98ca0245fe20749ce94924338455dffdddfee06f3279e0c6ff8"
	I0717 19:17:07.020762  318888 cri.go:89] found id: "f6dec920e96cf327838fec0d74079f5434ac14535a8435b632e013e75fdb381f"
	I0717 19:17:07.020768  318888 cri.go:89] found id: "883de01fb9bb917f2b2c7d622bacd82c691a1416de9b63dd431f017be84c6eee"
	I0717 19:17:07.020773  318888 cri.go:89] found id: "e10be0e9af17f8987e24e478a481f2c0799874fb1457f4c96c792fa327c1e1fe"
	I0717 19:17:07.020778  318888 cri.go:89] found id: "6c2d784dbdd1891bc8ee8488658197313688eb7036abdb4530230dcf2eccb3fa"
	I0717 19:17:07.020784  318888 cri.go:89] found id: "249f2d6748858a003aca4fb5b25038d813f220e6af0a35223e06266835417555"
	I0717 19:17:07.020789  318888 cri.go:89] found id: ""
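The container IDs above were collected with crictl, filtered by the io.kubernetes.pod.namespace=kube-system label. A minimal Go sketch of the same query (illustrative helper; assumes crictl and sudo are present on the host):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kubeSystemContainerIDs lists CRI container IDs filtered by the kube-system
// namespace label, the same crictl invocation shown in the log.
func kubeSystemContainerIDs() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := kubeSystemContainerIDs()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	for _, id := range ids {
		fmt.Println(id)
	}
}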
	I0717 19:17:07.020836  318888 ssh_runner.go:195] Run: sudo runc list -f json
	I0717 19:17:07.047824  318888 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"249f2d6748858a003aca4fb5b25038d813f220e6af0a35223e06266835417555","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/249f2d6748858a003aca4fb5b25038d813f220e6af0a35223e06266835417555/userdata","rootfs":"/var/lib/containers/storage/overlay/fd2aa9b207f49e48d0ff362c959bf5688104dfd9f16135423c4718b9aeebc107/merged","created":"2023-07-17T19:16:23.821869064Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"ef1f98f0","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"ef1f98f0\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMe
ssagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"249f2d6748858a003aca4fb5b25038d813f220e6af0a35223e06266835417555","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-07-17T19:16:23.737826652Z","io.kubernetes.cri-o.Image":"b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20230511-dc714da8","io.kubernetes.cri-o.ImageRef":"b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-blwth\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"7367b120-9ad2-48ef-a098-f9427cd70ce7\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-blwth_7367b120-9ad2-48ef-a098-f9427cd70ce7/kindnet-cni/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\"}","io.kubernetes.cri-o.MountPoint":
"/var/lib/containers/storage/overlay/fd2aa9b207f49e48d0ff362c959bf5688104dfd9f16135423c4718b9aeebc107/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-blwth_kube-system_7367b120-9ad2-48ef-a098-f9427cd70ce7_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/52b5cc4aad2ad9be691effa49714cc8f6b39045961a40662dd74c5acc9780241/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"52b5cc4aad2ad9be691effa49714cc8f6b39045961a40662dd74c5acc9780241","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-blwth_kube-system_7367b120-9ad2-48ef-a098-f9427cd70ce7_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"se
linux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/7367b120-9ad2-48ef-a098-f9427cd70ce7/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/7367b120-9ad2-48ef-a098-f9427cd70ce7/containers/kindnet-cni/0dcfda50\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/7367b120-9ad2-48ef-a098-f9427cd70ce7/volumes/kubernetes.io~projected/kube-api-access-cl564\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kindnet-blwth","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"7367b120-9ad2-48ef-a098-f9427cd70ce7"
,"kubernetes.io/config.seen":"2023-07-17T19:16:23.332665951Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3c4ecc96bf0b93221b1cd41bd5f1526256df3f33b13568ef0d2794d1eb83eca3","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/3c4ecc96bf0b93221b1cd41bd5f1526256df3f33b13568ef0d2794d1eb83eca3/userdata","rootfs":"/var/lib/containers/storage/overlay/984e04278704013a855ebd140b487dec437c7f2b88ad66ca0ad0ae3ccf7a5795/merged","created":"2023-07-17T19:17:05.090088778Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"88ae6cec","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"88ae6cec\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/de
v/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"3c4ecc96bf0b93221b1cd41bd5f1526256df3f33b13568ef0d2794d1eb83eca3","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-07-17T19:17:04.97761483Z","io.kubernetes.cri-o.Image":"08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.27.3","io.kubernetes.cri-o.ImageRef":"08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-795576\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"b854cc24c9327d52e830e509c0b45f70\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-795576_b854cc24c9327d52e830e509c0b45f70/kube-apiserver/1.log","io.kubernetes.c
ri-o.Metadata":"{\"name\":\"kube-apiserver\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/984e04278704013a855ebd140b487dec437c7f2b88ad66ca0ad0ae3ccf7a5795/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-795576_kube-system_b854cc24c9327d52e830e509c0b45f70_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/a59836c7236b4631707596f0175cb8e9117fee3121c48eec6988cf1f1d7d14d4/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"a59836c7236b4631707596f0175cb8e9117fee3121c48eec6988cf1f1d7d14d4","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-795576_kube-system_b854cc24c9327d52e830e509c0b45f70_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/b854cc24c9327d52e830e509c0b45f70/co
ntainers/kube-apiserver/e2c9bf0d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/b854cc24c9327d52e830e509c0b45f70/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"sel
inux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-pause-795576","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"b854cc24c9327d52e830e509c0b45f70","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"b854cc24c9327d52e830e509c0b45f70","kubernetes.io/config.seen":"2023-07-17T19:16:00.717752512Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6c2d784dbdd1891bc8ee8488658197313688eb7036abdb4530230dcf2eccb3fa","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/6c2d784dbdd1891bc8ee8488658197313688eb7036abdb4530230dcf2eccb3fa/userdata","rootfs":"/var/lib/containers/storage/overlay/de01942e3c1323fdb872b4cd4d75c3b8f377b3b580bae9af93589ce307c636f7/merged","created":"2023-07-17T19:16:23.870191032Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"47638398","io.kubernetes.container.na
me":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"47638398\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"6c2d784dbdd1891bc8ee8488658197313688eb7036abdb4530230dcf2eccb3fa","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-07-17T19:16:23.762428287Z","io.kubernetes.cri-o.Image":"5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-proxy:v1.27.3","io.kubernetes.cri-o.ImageRef":"5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c","io.kubernetes.cri-o.Labels":"{\"io.kub
ernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-vcv28\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"543aec10-6af6-4088-941a-d684da877b3f\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-vcv28_543aec10-6af6-4088-941a-d684da877b3f/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/de01942e3c1323fdb872b4cd4d75c3b8f377b3b580bae9af93589ce307c636f7/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-vcv28_kube-system_543aec10-6af6-4088-941a-d684da877b3f_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/be9d0f26dd7c3ab191a5abf36da714632cbd0f3cda9ce14b052bad43e9c67620/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"be9d0f26dd7c3ab191a5abf36da714632cbd0f3cda9ce14b052bad43e9c67620","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-vcv28_kube-system_543aec10-6af6-4088-941a-d684da877b3f_0","io.k
ubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/543aec10-6af6-4088-941a-d684da877b3f/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/543aec10-6af6-4088-941a-d684da877b3f/containers/kube-proxy/0aa973d9\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/543aec10-6af6-4088-941a-d684da877b3f/volumes/kubernetes.io~configmap/kube-proxy\",\"r
eadonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/543aec10-6af6-4088-941a-d684da877b3f/volumes/kubernetes.io~projected/kube-api-access-hh7kg\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-proxy-vcv28","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"543aec10-6af6-4088-941a-d684da877b3f","kubernetes.io/config.seen":"2023-07-17T19:16:23.331165044Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"883de01fb9bb917f2b2c7d622bacd82c691a1416de9b63dd431f017be84c6eee","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/883de01fb9bb917f2b2c7d622bacd82c691a1416de9b63dd431f017be84c6eee/userdata","rootfs":"/var/lib/containers/storage/overlay/642a656c15db83e8d642e2962a223fbbd43a29afb57a204f39604a8ee358de79/merged","created":"20
23-07-17T19:17:04.995896916Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"159e1046","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"159e1046\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"883de01fb9bb917f2b2c7d622bacd82c691a1416de9b63dd431f017be84c6eee","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-07-17T19:17:04.896943836Z","io.kubernetes.cri-o.Image":"41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-sch
eduler:v1.27.3","io.kubernetes.cri-o.ImageRef":"41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-795576\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"400d9ca1adcedd07ea455c43546148bb\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-795576_400d9ca1adcedd07ea455c43546148bb/kube-scheduler/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/642a656c15db83e8d642e2962a223fbbd43a29afb57a204f39604a8ee358de79/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-795576_kube-system_400d9ca1adcedd07ea455c43546148bb_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/8c0aa2dd28d39ba50cfb2072a76b03120b8e0f39d2e7bd70d851fe70c79305ce/userdata/resolv.conf","io.kube
rnetes.cri-o.SandboxID":"8c0aa2dd28d39ba50cfb2072a76b03120b8e0f39d2e7bd70d851fe70c79305ce","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-795576_kube-system_400d9ca1adcedd07ea455c43546148bb_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/400d9ca1adcedd07ea455c43546148bb/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/400d9ca1adcedd07ea455c43546148bb/containers/kube-scheduler/2ab0d4ac\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-pause-79
5576","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"400d9ca1adcedd07ea455c43546148bb","kubernetes.io/config.hash":"400d9ca1adcedd07ea455c43546148bb","kubernetes.io/config.seen":"2023-07-17T19:16:00.717755611Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d061dfba20f3f98ca0245fe20749ce94924338455dffdddfee06f3279e0c6ff8","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/d061dfba20f3f98ca0245fe20749ce94924338455dffdddfee06f3279e0c6ff8/userdata","rootfs":"/var/lib/containers/storage/overlay/efdd45245e1d01175642c7e1fe9efdd38e2efc70b57415517262a57f4d2a71a1/merged","created":"2023-07-17T19:17:05.080545496Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"97f28112","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes
.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"97f28112\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"d061dfba20f3f98ca0245fe20749ce94924338455dffdddfee06f3279e0c6ff8","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-07-17T19:17:04.966773768Z","io.kubernetes.cri-o.Image":"7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.27.3","io.kubernetes.cri-o.ImageRef":"7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-795576\",\"io.kuberne
tes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"1694f1546c77512884d0dfe3bf2a4ba0\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-795576_1694f1546c77512884d0dfe3bf2a4ba0/kube-controller-manager/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/efdd45245e1d01175642c7e1fe9efdd38e2efc70b57415517262a57f4d2a71a1/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-795576_kube-system_1694f1546c77512884d0dfe3bf2a4ba0_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/20ae7a52e858945587eb7f163d34f79bc2b9a6ce18aad1af8d65006001a8854c/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"20ae7a52e858945587eb7f163d34f79bc2b9a6ce18aad1af8d65006001a8854c","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-795576_kube-system_1694f1546c77512884d0dfe3bf2a4ba0_0","io.kube
rnetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/1694f1546c77512884d0dfe3bf2a4ba0/containers/kube-controller-manager/8a7a154a\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/1694f1546c77512884d0dfe3bf2a4ba0/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\
"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-pause-795576","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"1694f1546c77512884d0dfe3bf2a4ba0","kubernetes.io/config.hash":"1694f1546c77512884d0dfe3bf2a4ba0","kubern
etes.io/config.seen":"2023-07-17T19:16:00.717754116Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e10be0e9af17f8987e24e478a481f2c0799874fb1457f4c96c792fa327c1e1fe","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/e10be0e9af17f8987e24e478a481f2c0799874fb1457f4c96c792fa327c1e1fe/userdata","rootfs":"/var/lib/containers/storage/overlay/b3bae204884484a6b35550971ac8a6e805769241762f4c3a7c9c308965995a04/merged","created":"2023-07-17T19:16:55.408378152Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"5bffbcbc","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.ter
minationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"5bffbcbc\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"e10be0e9af17f8987e24e478a481f2c0799874fb1457f4c96c792fa327c1e1fe","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-07-17T19:16:55.363851867Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","io.kubernetes.cri-o.ImageName":"
registry.k8s.io/coredns/coredns:v1.10.1","io.kubernetes.cri-o.ImageRef":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-5d78c9869d-7bhk2\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"113dbc11-1279-4188-b57f-ef1a7476354e\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-5d78c9869d-7bhk2_113dbc11-1279-4188-b57f-ef1a7476354e/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/b3bae204884484a6b35550971ac8a6e805769241762f4c3a7c9c308965995a04/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-5d78c9869d-7bhk2_kube-system_113dbc11-1279-4188-b57f-ef1a7476354e_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/d649adf698c9dafde02b8a12fb695beb81795107e7d027d64cadfd235bb2ac80/userdata/resolv.conf","io.kubernetes.cri-o.S
andboxID":"d649adf698c9dafde02b8a12fb695beb81795107e7d027d64cadfd235bb2ac80","io.kubernetes.cri-o.SandboxName":"k8s_coredns-5d78c9869d-7bhk2_kube-system_113dbc11-1279-4188-b57f-ef1a7476354e_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/113dbc11-1279-4188-b57f-ef1a7476354e/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/113dbc11-1279-4188-b57f-ef1a7476354e/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/113dbc11-1279-4188-b57f-ef1a7476354e/containers/coredns/8db12501\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"
/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/113dbc11-1279-4188-b57f-ef1a7476354e/volumes/kubernetes.io~projected/kube-api-access-k8bj6\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-5d78c9869d-7bhk2","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"113dbc11-1279-4188-b57f-ef1a7476354e","kubernetes.io/config.seen":"2023-07-17T19:16:54.973191308Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f6dec920e96cf327838fec0d74079f5434ac14535a8435b632e013e75fdb381f","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/f6dec920e96cf327838fec0d74079f5434ac14535a8435b632e013e75fdb381f/userdata","rootfs":"/var/lib/containers/storage/overlay/346787948b55ebcb618ece9de2dd56e22018cfd47ef405d4603a6f740a88967c/merged","created":"2023-07-17T19:17:05.075290261Z","annotations":{"io.container.manager":"cri-o
","io.kubernetes.container.hash":"95733f07","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"95733f07\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"f6dec920e96cf327838fec0d74079f5434ac14535a8435b632e013e75fdb381f","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-07-17T19:17:04.925145157Z","io.kubernetes.cri-o.Image":"86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.7-0","io.kubernetes.cri-o.ImageRef":"86b6af7dd652c1b38118be1c338e9354b33469e69a218f
7e290a0ca5304ad681","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-pause-795576\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"d64400546f98bb129596be581950ced8\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-795576_d64400546f98bb129596be581950ced8/etcd/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/346787948b55ebcb618ece9de2dd56e22018cfd47ef405d4603a6f740a88967c/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-pause-795576_kube-system_d64400546f98bb129596be581950ced8_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/977426b5ad0d404749f9b90f6b18505fa16b074792252144b0b36642498b9e5c/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"977426b5ad0d404749f9b90f6b18505fa16b074792252144b0b36642498b9e5c","io.kubernetes.cri-o.SandboxName":"k8s_etcd-pause-795576_kube-system_d644
00546f98bb129596be581950ced8_0","io.kubernetes.cri-o.SeccompProfilePath":"runtime/default","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/d64400546f98bb129596be581950ced8/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/d64400546f98bb129596be581950ced8/containers/etcd/cf0ef901\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-pause-795576","io.kubernetes.pod.namespace":"kube-sys
tem","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"d64400546f98bb129596be581950ced8","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.67.2:2379","kubernetes.io/config.hash":"d64400546f98bb129596be581950ced8","kubernetes.io/config.seen":"2023-07-17T19:16:00.717746912Z","kubernetes.io/config.source":"file"},"owner":"root"}]
	I0717 19:17:07.048354  318888 cri.go:126] list returned 7 containers
	I0717 19:17:07.048372  318888 cri.go:129] container: {ID:249f2d6748858a003aca4fb5b25038d813f220e6af0a35223e06266835417555 Status:stopped}
	I0717 19:17:07.048392  318888 cri.go:135] skipping {249f2d6748858a003aca4fb5b25038d813f220e6af0a35223e06266835417555 stopped}: state = "stopped", want "paused"
	I0717 19:17:07.048406  318888 cri.go:129] container: {ID:3c4ecc96bf0b93221b1cd41bd5f1526256df3f33b13568ef0d2794d1eb83eca3 Status:stopped}
	I0717 19:17:07.048419  318888 cri.go:135] skipping {3c4ecc96bf0b93221b1cd41bd5f1526256df3f33b13568ef0d2794d1eb83eca3 stopped}: state = "stopped", want "paused"
	I0717 19:17:07.048429  318888 cri.go:129] container: {ID:6c2d784dbdd1891bc8ee8488658197313688eb7036abdb4530230dcf2eccb3fa Status:stopped}
	I0717 19:17:07.048437  318888 cri.go:135] skipping {6c2d784dbdd1891bc8ee8488658197313688eb7036abdb4530230dcf2eccb3fa stopped}: state = "stopped", want "paused"
	I0717 19:17:07.048447  318888 cri.go:129] container: {ID:883de01fb9bb917f2b2c7d622bacd82c691a1416de9b63dd431f017be84c6eee Status:stopped}
	I0717 19:17:07.048460  318888 cri.go:135] skipping {883de01fb9bb917f2b2c7d622bacd82c691a1416de9b63dd431f017be84c6eee stopped}: state = "stopped", want "paused"
	I0717 19:17:07.048475  318888 cri.go:129] container: {ID:d061dfba20f3f98ca0245fe20749ce94924338455dffdddfee06f3279e0c6ff8 Status:stopped}
	I0717 19:17:07.048486  318888 cri.go:135] skipping {d061dfba20f3f98ca0245fe20749ce94924338455dffdddfee06f3279e0c6ff8 stopped}: state = "stopped", want "paused"
	I0717 19:17:07.048493  318888 cri.go:129] container: {ID:e10be0e9af17f8987e24e478a481f2c0799874fb1457f4c96c792fa327c1e1fe Status:stopped}
	I0717 19:17:07.048505  318888 cri.go:135] skipping {e10be0e9af17f8987e24e478a481f2c0799874fb1457f4c96c792fa327c1e1fe stopped}: state = "stopped", want "paused"
	I0717 19:17:07.048515  318888 cri.go:129] container: {ID:f6dec920e96cf327838fec0d74079f5434ac14535a8435b632e013e75fdb381f Status:stopped}
	I0717 19:17:07.048523  318888 cri.go:135] skipping {f6dec920e96cf327838fec0d74079f5434ac14535a8435b632e013e75fdb381f stopped}: state = "stopped", want "paused"
	I0717 19:17:07.048577  318888 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:17:07.059554  318888 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0717 19:17:07.059576  318888 kubeadm.go:636] restartCluster start
	I0717 19:17:07.059630  318888 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 19:17:07.069388  318888 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:07.070401  318888 kubeconfig.go:92] found "pause-795576" server: "https://192.168.67.2:8443"
	I0717 19:17:07.071928  318888 kapi.go:59] client config for pause-795576: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16890-138069/.minikube/profiles/pause-795576/client.crt", KeyFile:"/home/jenkins/minikube-integration/16890-138069/.minikube/profiles/pause-795576/client.key", CAFile:"/home/jenkins/minikube-integration/16890-138069/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 19:17:07.072899  318888 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 19:17:07.081588  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:07.081647  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:07.091239  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	W0717 19:17:06.973034  319694 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
	I0717 19:17:06.973134  319694 ssh_runner.go:195] Run: systemctl --version
	I0717 19:17:06.978177  319694 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:17:07.122315  319694 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 19:17:07.127418  319694 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:17:07.137647  319694 cni.go:227] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0717 19:17:07.137731  319694 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:17:07.145955  319694 cni.go:265] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0717 19:17:07.145978  319694 start.go:469] detecting cgroup driver to use...
	I0717 19:17:07.146010  319694 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 19:17:07.146059  319694 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:17:07.156888  319694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:17:07.167672  319694 docker.go:196] disabling cri-docker service (if available) ...
	I0717 19:17:07.167719  319694 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:17:07.179907  319694 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:17:07.190272  319694 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:17:07.264921  319694 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:17:07.340775  319694 docker.go:212] disabling docker service ...
	I0717 19:17:07.340853  319694 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:17:07.353566  319694 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:17:07.364478  319694 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:17:07.430548  319694 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:17:07.507034  319694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:17:07.517521  319694 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:17:07.532966  319694 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0717 19:17:07.533024  319694 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:17:07.542853  319694 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:17:07.542927  319694 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:17:07.551715  319694 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:17:07.560532  319694 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:17:07.569414  319694 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:17:07.578293  319694 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:17:07.588353  319694 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:17:07.597167  319694 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:17:07.669313  319694 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0717 19:17:07.781188  319694 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:17:07.781266  319694 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:17:07.784741  319694 start.go:537] Will wait 60s for crictl version
	I0717 19:17:07.784801  319694 ssh_runner.go:195] Run: which crictl
	I0717 19:17:07.787919  319694 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:17:07.821326  319694 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0717 19:17:07.821414  319694 ssh_runner.go:195] Run: crio --version
	I0717 19:17:07.855779  319694 ssh_runner.go:195] Run: crio --version
	I0717 19:17:07.896159  319694 out.go:177] * Preparing Kubernetes v1.27.3 on CRI-O 1.24.6 ...
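	[editor's note] The sequence above shows minikube rewriting /etc/crio/crio.conf.d/02-crio.conf with two sed commands (pause_image and cgroup_manager) and then restarting cri-o. The following is a minimal standalone sketch, not minikube's code, of the same line-based config rewrite; the file path and the regex-based replacement are assumptions taken directly from the logged commands.

	// crio_conf_rewrite.go - illustrative sketch of the two sed edits above.
	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(conf)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Point cri-o at the pause image minikube expects.
		out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
		// Match the host's cgroup driver (detected as cgroupfs in the log).
		out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
		if err := os.WriteFile(conf, out, 0o644); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// cri-o must be restarted afterwards, as the log does with
		// "sudo systemctl restart crio".
	}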
	I0717 19:17:04.376078  320967 delete.go:124] DEMOLISHING missing-upgrade-629154 ...
	I0717 19:17:04.376210  320967 cli_runner.go:164] Run: docker container inspect missing-upgrade-629154 --format={{.State.Status}}
	W0717 19:17:04.391633  320967 cli_runner.go:211] docker container inspect missing-upgrade-629154 --format={{.State.Status}} returned with exit code 1
	W0717 19:17:04.391699  320967 stop.go:75] unable to get state: unknown state "missing-upgrade-629154": docker container inspect missing-upgrade-629154 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-629154
	I0717 19:17:04.391718  320967 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-629154": docker container inspect missing-upgrade-629154 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-629154
	I0717 19:17:04.392086  320967 cli_runner.go:164] Run: docker container inspect missing-upgrade-629154 --format={{.State.Status}}
	W0717 19:17:04.407621  320967 cli_runner.go:211] docker container inspect missing-upgrade-629154 --format={{.State.Status}} returned with exit code 1
	I0717 19:17:04.407712  320967 delete.go:82] Unable to get host status for missing-upgrade-629154, assuming it has already been deleted: state: unknown state "missing-upgrade-629154": docker container inspect missing-upgrade-629154 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-629154
	I0717 19:17:04.407773  320967 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-629154
	W0717 19:17:04.423177  320967 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-629154 returned with exit code 1
	I0717 19:17:04.423219  320967 kic.go:367] could not find the container missing-upgrade-629154 to remove it. will try anyways
	I0717 19:17:04.423266  320967 cli_runner.go:164] Run: docker container inspect missing-upgrade-629154 --format={{.State.Status}}
	W0717 19:17:04.441371  320967 cli_runner.go:211] docker container inspect missing-upgrade-629154 --format={{.State.Status}} returned with exit code 1
	W0717 19:17:04.441433  320967 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-629154": docker container inspect missing-upgrade-629154 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-629154
	I0717 19:17:04.441489  320967 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-629154 /bin/bash -c "sudo init 0"
	W0717 19:17:04.457872  320967 cli_runner.go:211] docker exec --privileged -t missing-upgrade-629154 /bin/bash -c "sudo init 0" returned with exit code 1
	I0717 19:17:04.457923  320967 oci.go:647] error shutdown missing-upgrade-629154: docker exec --privileged -t missing-upgrade-629154 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-629154
	I0717 19:17:05.458123  320967 cli_runner.go:164] Run: docker container inspect missing-upgrade-629154 --format={{.State.Status}}
	W0717 19:17:05.481145  320967 cli_runner.go:211] docker container inspect missing-upgrade-629154 --format={{.State.Status}} returned with exit code 1
	I0717 19:17:05.481233  320967 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-629154": docker container inspect missing-upgrade-629154 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-629154
	I0717 19:17:05.481247  320967 oci.go:661] temporary error: container missing-upgrade-629154 status is  but expect it to be exited
	I0717 19:17:05.481288  320967 retry.go:31] will retry after 430.085087ms: couldn't verify container is exited. %!v(MISSING): unknown state "missing-upgrade-629154": docker container inspect missing-upgrade-629154 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-629154
	I0717 19:17:05.911760  320967 cli_runner.go:164] Run: docker container inspect missing-upgrade-629154 --format={{.State.Status}}
	W0717 19:17:05.929551  320967 cli_runner.go:211] docker container inspect missing-upgrade-629154 --format={{.State.Status}} returned with exit code 1
	I0717 19:17:05.929629  320967 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-629154": docker container inspect missing-upgrade-629154 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-629154
	I0717 19:17:05.929642  320967 oci.go:661] temporary error: container missing-upgrade-629154 status is  but expect it to be exited
	I0717 19:17:05.929680  320967 retry.go:31] will retry after 610.025992ms: couldn't verify container is exited. %!v(MISSING): unknown state "missing-upgrade-629154": docker container inspect missing-upgrade-629154 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-629154
	I0717 19:17:06.540568  320967 cli_runner.go:164] Run: docker container inspect missing-upgrade-629154 --format={{.State.Status}}
	W0717 19:17:06.558719  320967 cli_runner.go:211] docker container inspect missing-upgrade-629154 --format={{.State.Status}} returned with exit code 1
	I0717 19:17:06.558782  320967 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-629154": docker container inspect missing-upgrade-629154 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-629154
	I0717 19:17:06.558811  320967 oci.go:661] temporary error: container missing-upgrade-629154 status is  but expect it to be exited
	I0717 19:17:06.558845  320967 retry.go:31] will retry after 1.175735401s: couldn't verify container is exited. %!v(MISSING): unknown state "missing-upgrade-629154": docker container inspect missing-upgrade-629154 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-629154
	I0717 19:17:07.735178  320967 cli_runner.go:164] Run: docker container inspect missing-upgrade-629154 --format={{.State.Status}}
	W0717 19:17:07.751601  320967 cli_runner.go:211] docker container inspect missing-upgrade-629154 --format={{.State.Status}} returned with exit code 1
	I0717 19:17:07.751672  320967 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-629154": docker container inspect missing-upgrade-629154 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-629154
	I0717 19:17:07.751688  320967 oci.go:661] temporary error: container missing-upgrade-629154 status is  but expect it to be exited
	I0717 19:17:07.751715  320967 retry.go:31] will retry after 1.488312422s: couldn't verify container is exited. %!v(MISSING): unknown state "missing-upgrade-629154": docker container inspect missing-upgrade-629154 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-629154
	I0717 19:17:07.897716  319694 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-677764 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 19:17:07.913972  319694 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0717 19:17:07.917570  319694 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:17:07.930480  319694 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime crio
	I0717 19:17:07.930556  319694 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:17:07.970335  319694 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.27.3". assuming images are not preloaded.
	I0717 19:17:07.970401  319694 ssh_runner.go:195] Run: which lz4
	I0717 19:17:07.973785  319694 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0717 19:17:07.976894  319694 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 19:17:07.976930  319694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (437094661 bytes)
	I0717 19:17:08.876790  319694 crio.go:444] Took 0.903039 seconds to copy over tarball
	I0717 19:17:08.876869  319694 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 19:17:10.923864  319694 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.046958184s)
	I0717 19:17:10.923891  319694 crio.go:451] Took 2.047072 seconds to extract the tarball
	I0717 19:17:10.923900  319694 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 19:17:10.995654  319694 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:17:11.033449  319694 crio.go:496] all images are preloaded for cri-o runtime.
	I0717 19:17:11.033470  319694 cache_images.go:84] Images are preloaded, skipping loading
	I0717 19:17:11.033536  319694 ssh_runner.go:195] Run: crio config
	I0717 19:17:11.076475  319694 cni.go:84] Creating CNI manager for ""
	I0717 19:17:11.076498  319694 cni.go:149] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 19:17:11.076516  319694 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 19:17:11.076533  319694 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-677764 NodeName:kubernetes-upgrade-677764 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 19:17:11.076678  319694 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-677764"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 19:17:11.076739  319694 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-677764 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:kubernetes-upgrade-677764 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 19:17:11.076789  319694 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 19:17:11.085102  319694 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 19:17:11.085179  319694 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 19:17:11.093048  319694 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (435 bytes)
	I0717 19:17:11.108937  319694 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 19:17:11.124570  319694 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0717 19:17:11.140125  319694 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0717 19:17:11.143262  319694 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 19:17:11.153113  319694 certs.go:56] Setting up /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/kubernetes-upgrade-677764 for IP: 192.168.85.2
	I0717 19:17:11.153145  319694 certs.go:190] acquiring lock for shared ca certs: {Name:mk42196ce59710ebf500640671660e2f4656c84e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:17:11.153292  319694 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16890-138069/.minikube/ca.key
	I0717 19:17:11.153357  319694 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16890-138069/.minikube/proxy-client-ca.key
	I0717 19:17:11.153465  319694 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/kubernetes-upgrade-677764/client.key
	I0717 19:17:11.153534  319694 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/kubernetes-upgrade-677764/apiserver.key.43b9df8c
	I0717 19:17:11.153592  319694 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/kubernetes-upgrade-677764/proxy-client.key
	I0717 19:17:11.153723  319694 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/home/jenkins/minikube-integration/16890-138069/.minikube/certs/144822.pem (1338 bytes)
	W0717 19:17:11.153767  319694 certs.go:433] ignoring /home/jenkins/minikube-integration/16890-138069/.minikube/certs/home/jenkins/minikube-integration/16890-138069/.minikube/certs/144822_empty.pem, impossibly tiny 0 bytes
	I0717 19:17:11.153786  319694 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 19:17:11.153819  319694 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem (1078 bytes)
	I0717 19:17:11.153854  319694 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/home/jenkins/minikube-integration/16890-138069/.minikube/certs/cert.pem (1123 bytes)
	I0717 19:17:11.153884  319694 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/home/jenkins/minikube-integration/16890-138069/.minikube/certs/key.pem (1675 bytes)
	I0717 19:17:11.153945  319694 certs.go:437] found cert: /home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/1448222.pem (1708 bytes)
	I0717 19:17:11.154698  319694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/kubernetes-upgrade-677764/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 19:17:11.176452  319694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/kubernetes-upgrade-677764/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 19:17:11.197450  319694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/kubernetes-upgrade-677764/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 19:17:11.219208  319694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/kubernetes-upgrade-677764/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 19:17:11.240801  319694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 19:17:11.262498  319694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0717 19:17:11.284551  319694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 19:17:11.306440  319694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 19:17:11.328116  319694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/certs/144822.pem --> /usr/share/ca-certificates/144822.pem (1338 bytes)
	I0717 19:17:11.349097  319694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/1448222.pem --> /usr/share/ca-certificates/1448222.pem (1708 bytes)
	I0717 19:17:11.370305  319694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 19:17:11.391989  319694 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 19:17:11.409683  319694 ssh_runner.go:195] Run: openssl version
	I0717 19:17:11.414907  319694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/144822.pem && ln -fs /usr/share/ca-certificates/144822.pem /etc/ssl/certs/144822.pem"
	I0717 19:17:11.423323  319694 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/144822.pem
	I0717 19:17:11.426616  319694 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 18:51 /usr/share/ca-certificates/144822.pem
	I0717 19:17:11.426664  319694 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/144822.pem
	I0717 19:17:11.433278  319694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/144822.pem /etc/ssl/certs/51391683.0"
	I0717 19:17:11.441142  319694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1448222.pem && ln -fs /usr/share/ca-certificates/1448222.pem /etc/ssl/certs/1448222.pem"
	I0717 19:17:11.449686  319694 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1448222.pem
	I0717 19:17:11.453061  319694 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 18:51 /usr/share/ca-certificates/1448222.pem
	I0717 19:17:11.453118  319694 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1448222.pem
	I0717 19:17:11.459228  319694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1448222.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 19:17:11.466968  319694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 19:17:11.475233  319694 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:17:11.478394  319694 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 18:46 /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:17:11.478445  319694 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 19:17:11.484541  319694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 19:17:11.492555  319694 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 19:17:11.495732  319694 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 19:17:11.502135  319694 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 19:17:11.508481  319694 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 19:17:11.514577  319694 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 19:17:11.520744  319694 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 19:17:11.526774  319694 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
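	[editor's note] The repeated "openssl x509 -noout -in <cert> -checkend 86400" runs above check that each control-plane certificate is still valid 24 hours (86400 seconds) from now. A minimal Go sketch of the same check, assuming a PEM-encoded certificate file (the path below is illustrative, not minikube's implementation):

	// cert_checkend.go - equivalent of "openssl x509 -checkend 86400".
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the certificate at path expires within d.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}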
	I0717 19:17:11.533715  319694 kubeadm.go:404] StartCluster: {Name:kubernetes-upgrade-677764 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:kubernetes-upgrade-677764 Namespace:default APIServerName:minikubeCA APIServerNames:[] APISe
rverIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwareP
ath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 19:17:11.533824  319694 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0717 19:17:11.533871  319694 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:17:11.567652  319694 cri.go:89] found id: ""
	I0717 19:17:11.567724  319694 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 19:17:11.575934  319694 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0717 19:17:11.575958  319694 kubeadm.go:636] restartCluster start
	I0717 19:17:11.576033  319694 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 19:17:11.583557  319694 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:11.584390  319694 kubeconfig.go:135] verify returned: extract IP: "kubernetes-upgrade-677764" does not appear in /home/jenkins/minikube-integration/16890-138069/kubeconfig
	I0717 19:17:11.584810  319694 kubeconfig.go:146] "kubernetes-upgrade-677764" context is missing from /home/jenkins/minikube-integration/16890-138069/kubeconfig - will repair!
	I0717 19:17:11.585467  319694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-138069/kubeconfig: {Name:mkc53c034e0e90a78d013670a58d5882070a3e3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:17:11.586395  319694 kapi.go:59] client config for kubernetes-upgrade-677764: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16890-138069/.minikube/profiles/kubernetes-upgrade-677764/client.crt", KeyFile:"/home/jenkins/minikube-integration/16890-138069/.minikube/profiles/kubernetes-upgrade-677764/client.key", CAFile:"/home/jenkins/minikube-integration/16890-138069/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(ni
l), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 19:17:11.587144  319694 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 19:17:11.595173  319694 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2023-07-17 19:16:29.700850359 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2023-07-17 19:17:11.135848458 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.85.2
	@@ -11,13 +11,13 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/crio/crio.sock
	+  criSocket: unix:///var/run/crio/crio.sock
	   name: "kubernetes-upgrade-677764"
	   kubeletExtraArgs:
	     node-ip: 192.168.85.2
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	@@ -31,16 +31,14 @@
	   extraArgs:
	     leader-elect: "false"
	 certificatesDir: /var/lib/minikube/certs
	-clusterName: kubernetes-upgrade-677764
	+clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	-dns:
	-  type: CoreDNS
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	     extraArgs:
	-      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.85.2:2381
	-kubernetesVersion: v1.16.0
	+      proxy-refresh-interval: "70000"
	+kubernetesVersion: v1.27.3
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
	I0717 19:17:11.595193  319694 kubeadm.go:1128] stopping kube-system containers ...
	I0717 19:17:11.595206  319694 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 19:17:11.595251  319694 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:17:11.629743  319694 cri.go:89] found id: ""
	I0717 19:17:11.629813  319694 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 19:17:11.640757  319694 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:17:11.648487  319694 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5703 Jul 17 19:16 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5743 Jul 17 19:16 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5823 Jul 17 19:16 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5687 Jul 17 19:16 /etc/kubernetes/scheduler.conf
	
	I0717 19:17:11.648545  319694 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:17:11.656096  319694 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:17:11.663849  319694 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:17:11.671654  319694 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:17:11.679410  319694 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:17:11.687153  319694 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 19:17:11.687174  319694 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:17:11.735063  319694 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:17:07.592253  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:07.592317  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:07.602874  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:08.091426  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:08.091520  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:08.103609  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:08.592219  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:08.592301  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:08.606487  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:09.092214  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:09.092291  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:09.102918  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:09.591408  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:09.591498  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:09.602029  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:10.091629  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:10.091723  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:10.102016  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:10.591554  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:10.591677  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:10.601759  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:11.092357  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:11.092435  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:11.101989  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:11.591428  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:11.591509  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:11.601562  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:12.092211  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:12.092316  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:12.102546  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:09.240615  320967 cli_runner.go:164] Run: docker container inspect missing-upgrade-629154 --format={{.State.Status}}
	W0717 19:17:09.260343  320967 cli_runner.go:211] docker container inspect missing-upgrade-629154 --format={{.State.Status}} returned with exit code 1
	I0717 19:17:09.260427  320967 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-629154": docker container inspect missing-upgrade-629154 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-629154
	I0717 19:17:09.260448  320967 oci.go:661] temporary error: container missing-upgrade-629154 status is  but expect it to be exited
	I0717 19:17:09.260524  320967 retry.go:31] will retry after 2.925283312s: couldn't verify container is exited. %!v(MISSING): unknown state "missing-upgrade-629154": docker container inspect missing-upgrade-629154 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-629154
	I0717 19:17:12.188659  320967 cli_runner.go:164] Run: docker container inspect missing-upgrade-629154 --format={{.State.Status}}
	W0717 19:17:12.204822  320967 cli_runner.go:211] docker container inspect missing-upgrade-629154 --format={{.State.Status}} returned with exit code 1
	I0717 19:17:12.204899  320967 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-629154": docker container inspect missing-upgrade-629154 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-629154
	I0717 19:17:12.204916  320967 oci.go:661] temporary error: container missing-upgrade-629154 status is  but expect it to be exited
	I0717 19:17:12.204941  320967 retry.go:31] will retry after 2.348489928s: couldn't verify container is exited. %!v(MISSING): unknown state "missing-upgrade-629154": docker container inspect missing-upgrade-629154 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-629154
	I0717 19:17:12.260088  319694 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:17:12.384271  319694 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:17:12.436569  319694 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:17:12.562426  319694 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:17:12.562489  319694 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:17:13.073206  319694 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:17:13.573875  319694 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:17:13.585224  319694 api_server.go:72] duration metric: took 1.022794969s to wait for apiserver process to appear ...
	I0717 19:17:13.585251  319694 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:17:13.585273  319694 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
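	[editor's note] At this point the log waits for the apiserver's /healthz endpoint to report healthy. A minimal sketch of that wait loop, assuming the URL from the log; the InsecureSkipVerify setting is an illustrative shortcut only (minikube verifies against its own CA, not shown here):

	// healthz_wait.go - poll /healthz until it returns "ok" or the deadline passes.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				// Illustrative assumption; the real client trusts the cluster CA.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.85.2:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("apiserver healthz ok")
	}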
	I0717 19:17:12.592341  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:12.592414  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:12.602282  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:13.091814  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:13.091898  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:13.102925  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:13.591410  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:13.591497  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:13.601902  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:14.092039  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:14.092143  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:14.102651  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:14.591810  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:14.591897  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:14.601989  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:15.091535  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:15.091626  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:15.101734  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:15.591299  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:15.591383  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:15.601527  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:16.092099  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:16.092203  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:16.102156  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:16.591684  318888 api_server.go:166] Checking apiserver status ...
	I0717 19:17:16.591790  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 19:17:16.601816  318888 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:17.082386  318888 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0717 19:17:17.082436  318888 kubeadm.go:1128] stopping kube-system containers ...
	I0717 19:17:17.082451  318888 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0717 19:17:17.082517  318888 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0717 19:17:17.118915  318888 cri.go:89] found id: "ab7184693b8535872a6449bd84279882db6966e0d108be297584389fcbd446cd"
	I0717 19:17:17.118943  318888 cri.go:89] found id: "3c4ecc96bf0b93221b1cd41bd5f1526256df3f33b13568ef0d2794d1eb83eca3"
	I0717 19:17:17.118951  318888 cri.go:89] found id: "d061dfba20f3f98ca0245fe20749ce94924338455dffdddfee06f3279e0c6ff8"
	I0717 19:17:17.118957  318888 cri.go:89] found id: "f6dec920e96cf327838fec0d74079f5434ac14535a8435b632e013e75fdb381f"
	I0717 19:17:17.118963  318888 cri.go:89] found id: "883de01fb9bb917f2b2c7d622bacd82c691a1416de9b63dd431f017be84c6eee"
	I0717 19:17:17.118969  318888 cri.go:89] found id: "e10be0e9af17f8987e24e478a481f2c0799874fb1457f4c96c792fa327c1e1fe"
	I0717 19:17:17.118976  318888 cri.go:89] found id: "6c2d784dbdd1891bc8ee8488658197313688eb7036abdb4530230dcf2eccb3fa"
	I0717 19:17:17.118981  318888 cri.go:89] found id: "249f2d6748858a003aca4fb5b25038d813f220e6af0a35223e06266835417555"
	I0717 19:17:17.118985  318888 cri.go:89] found id: ""
	I0717 19:17:17.118990  318888 cri.go:234] Stopping containers: [ab7184693b8535872a6449bd84279882db6966e0d108be297584389fcbd446cd 3c4ecc96bf0b93221b1cd41bd5f1526256df3f33b13568ef0d2794d1eb83eca3 d061dfba20f3f98ca0245fe20749ce94924338455dffdddfee06f3279e0c6ff8 f6dec920e96cf327838fec0d74079f5434ac14535a8435b632e013e75fdb381f 883de01fb9bb917f2b2c7d622bacd82c691a1416de9b63dd431f017be84c6eee e10be0e9af17f8987e24e478a481f2c0799874fb1457f4c96c792fa327c1e1fe 6c2d784dbdd1891bc8ee8488658197313688eb7036abdb4530230dcf2eccb3fa 249f2d6748858a003aca4fb5b25038d813f220e6af0a35223e06266835417555]
	I0717 19:17:17.119041  318888 ssh_runner.go:195] Run: which crictl
	I0717 19:17:17.122491  318888 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 ab7184693b8535872a6449bd84279882db6966e0d108be297584389fcbd446cd 3c4ecc96bf0b93221b1cd41bd5f1526256df3f33b13568ef0d2794d1eb83eca3 d061dfba20f3f98ca0245fe20749ce94924338455dffdddfee06f3279e0c6ff8 f6dec920e96cf327838fec0d74079f5434ac14535a8435b632e013e75fdb381f 883de01fb9bb917f2b2c7d622bacd82c691a1416de9b63dd431f017be84c6eee e10be0e9af17f8987e24e478a481f2c0799874fb1457f4c96c792fa327c1e1fe 6c2d784dbdd1891bc8ee8488658197313688eb7036abdb4530230dcf2eccb3fa 249f2d6748858a003aca4fb5b25038d813f220e6af0a35223e06266835417555
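The container teardown above is a two-step crictl sequence; a minimal sketch, assuming crictl is configured against the node's CRI-O socket (container IDs are placeholders):

	# List the IDs of every container in the kube-system namespace, running or not.
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# Stop each of them, giving the runtime 10 seconds before a forced kill.
	sudo crictl stop --timeout=10 <container-id> [<container-id> ...]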
	I0717 19:17:14.554577  320967 cli_runner.go:164] Run: docker container inspect missing-upgrade-629154 --format={{.State.Status}}
	W0717 19:17:14.571476  320967 cli_runner.go:211] docker container inspect missing-upgrade-629154 --format={{.State.Status}} returned with exit code 1
	I0717 19:17:14.571547  320967 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-629154": docker container inspect missing-upgrade-629154 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-629154
	I0717 19:17:14.571564  320967 oci.go:661] temporary error: container missing-upgrade-629154 status is  but expect it to be exited
	I0717 19:17:14.571591  320967 retry.go:31] will retry after 3.344538832s: couldn't verify container is exited. %!v(MISSING): unknown state "missing-upgrade-629154": docker container inspect missing-upgrade-629154 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-629154
	I0717 19:17:17.916378  320967 cli_runner.go:164] Run: docker container inspect missing-upgrade-629154 --format={{.State.Status}}
	W0717 19:17:17.935179  320967 cli_runner.go:211] docker container inspect missing-upgrade-629154 --format={{.State.Status}} returned with exit code 1
	I0717 19:17:17.935270  320967 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-629154": docker container inspect missing-upgrade-629154 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-629154
	I0717 19:17:17.935292  320967 oci.go:661] temporary error: container missing-upgrade-629154 status is  but expect it to be exited
	I0717 19:17:17.935336  320967 oci.go:88] couldn't shut down missing-upgrade-629154 (might be okay): verify shutdown: couldn't verify container is exited. %!v(MISSING): unknown state "missing-upgrade-629154": docker container inspect missing-upgrade-629154 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-629154
	 
	I0717 19:17:17.935395  320967 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-629154
	I0717 19:17:17.953470  320967 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-629154
	W0717 19:17:17.973573  320967 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-629154 returned with exit code 1
	I0717 19:17:17.973680  320967 cli_runner.go:164] Run: docker network inspect  --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0717 19:17:17.992563  320967 cli_runner.go:211] docker network inspect  --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0717 19:17:17.992659  320967 network_create.go:281] running [docker network inspect ] to gather additional debugging logs...
	I0717 19:17:17.992690  320967 cli_runner.go:164] Run: docker network inspect 
	W0717 19:17:18.009786  320967 cli_runner.go:211] docker network inspect  returned with exit code 1
	I0717 19:17:18.009826  320967 network_create.go:284] error running [docker network inspect ]: docker network inspect : exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: 
	I0717 19:17:18.009839  320967 network_create.go:286] output of [docker network inspect ]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: 
	
	** /stderr **
	I0717 19:17:18.010035  320967 fix.go:114] Sleeping 1 second for extra luck!
	I0717 19:17:19.010169  320967 start.go:125] createHost starting for "m01" (driver="docker")
	I0717 19:17:19.012954  320967 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0717 19:17:19.013154  320967 start.go:159] libmachine.API.Create for "missing-upgrade-629154" (driver="docker")
	I0717 19:17:19.013189  320967 client.go:168] LocalClient.Create starting
	I0717 19:17:19.013296  320967 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem
	I0717 19:17:19.013340  320967 main.go:141] libmachine: Decoding PEM data...
	I0717 19:17:19.013361  320967 main.go:141] libmachine: Parsing certificate...
	I0717 19:17:19.013432  320967 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/16890-138069/.minikube/certs/cert.pem
	I0717 19:17:19.013454  320967 main.go:141] libmachine: Decoding PEM data...
	I0717 19:17:19.013468  320967 main.go:141] libmachine: Parsing certificate...
	I0717 19:17:19.014386  320967 cli_runner.go:164] Run: docker network inspect missing-upgrade-629154 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0717 19:17:19.031228  320967 cli_runner.go:211] docker network inspect missing-upgrade-629154 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0717 19:17:19.031307  320967 network_create.go:281] running [docker network inspect missing-upgrade-629154] to gather additional debugging logs...
	I0717 19:17:19.031327  320967 cli_runner.go:164] Run: docker network inspect missing-upgrade-629154
	W0717 19:17:19.047202  320967 cli_runner.go:211] docker network inspect missing-upgrade-629154 returned with exit code 1
	I0717 19:17:19.047245  320967 network_create.go:284] error running [docker network inspect missing-upgrade-629154]: docker network inspect missing-upgrade-629154: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-629154 not found
	I0717 19:17:19.047260  320967 network_create.go:286] output of [docker network inspect missing-upgrade-629154]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-629154 not found
	
	** /stderr **
	I0717 19:17:19.047324  320967 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 19:17:19.064674  320967 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-1070ebc8dfdf IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:9e:80:fb:8c} reservation:<nil>}
	I0717 19:17:19.065491  320967 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-743d16d82889 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:b6:c0:17:7b} reservation:<nil>}
	I0717 19:17:19.066074  320967 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-61bb7c620e40 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:de:71:25:d5} reservation:<nil>}
	I0717 19:17:19.066872  320967 network.go:214] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-daa1021b57a1 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:ac:ec:89:33} reservation:<nil>}
	I0717 19:17:19.067732  320967 network.go:214] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-75d7f2c6b3bf IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:02:42:07:a6:ac:63} reservation:<nil>}
	I0717 19:17:19.068803  320967 network.go:209] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00140ab20}
	I0717 19:17:19.068835  320967 network_create.go:123] attempt to create docker network missing-upgrade-629154 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I0717 19:17:19.068903  320967 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-629154 missing-upgrade-629154
	I0717 19:17:19.127380  320967 network_create.go:107] docker network missing-upgrade-629154 192.168.94.0/24 created
	I0717 19:17:19.127417  320967 kic.go:117] calculated static IP "192.168.94.2" for the "missing-upgrade-629154" container
	I0717 19:17:19.127480  320967 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
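The "skipping subnet ... that is taken" lines above walk the 192.168.x.0/24 ranges until one is unused by any existing bridge, then a dedicated network is created for the profile. Roughly equivalent docker commands, sketched under the assumption that 192.168.94.0/24 is still free on the host:

	# See which subnets existing networks already claim.
	docker network ls
	docker network inspect bridge --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
	# Create the per-profile bridge with the chosen subnet, gateway and MTU.
	docker network create --driver=bridge \
	  --subnet=192.168.94.0/24 --gateway=192.168.94.1 \
	  -o com.docker.network.driver.mtu=1500 \
	  --label=name.minikube.sigs.k8s.io=missing-upgrade-629154 \
	  missing-upgrade-629154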
	I0717 19:17:18.586971  319694 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 19:17:19.087815  319694 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0717 19:17:17.531585  318888 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 19:17:17.626924  318888 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 19:17:17.636069  318888 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jul 17 19:15 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jul 17 19:16 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Jul 17 19:16 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jul 17 19:16 /etc/kubernetes/scheduler.conf
	
	I0717 19:17:17.636156  318888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 19:17:17.644854  318888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 19:17:17.653576  318888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 19:17:17.662019  318888 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:17.662095  318888 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 19:17:17.670391  318888 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 19:17:17.679253  318888 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0717 19:17:17.679334  318888 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 19:17:17.687631  318888 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 19:17:17.696369  318888 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 19:17:17.696393  318888 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:17:17.748307  318888 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:17:18.632331  318888 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:17:18.797010  318888 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:17:18.853832  318888 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:17:18.986878  318888 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:17:18.986960  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:17:19.498103  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:17:19.997578  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:17:20.497612  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:17:20.509814  318888 api_server.go:72] duration metric: took 1.522935408s to wait for apiserver process to appear ...
	I0717 19:17:20.509839  318888 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:17:20.509859  318888 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0717 19:17:22.411960  318888 api_server.go:279] https://192.168.67.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 19:17:22.412022  318888 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 19:17:22.912688  318888 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0717 19:17:22.918471  318888 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 19:17:22.918506  318888 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 19:17:23.413158  318888 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0717 19:17:23.418644  318888 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 19:17:23.418672  318888 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 19:17:23.912182  318888 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0717 19:17:23.917834  318888 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0717 19:17:23.926485  318888 api_server.go:141] control plane version: v1.27.3
	I0717 19:17:23.926517  318888 api_server.go:131] duration metric: took 3.416671828s to wait for apiserver health ...
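The healthz polling above (403 while anonymous access is refused, 500 while the rbac/bootstrap-roles hook is still failing, then 200) can be reproduced by hand; a small sketch, assuming kubectl is pointed at the same cluster:

	# Authenticated equivalent of the probe in the log; ?verbose lists the
	# individual post-start hooks, including rbac/bootstrap-roles.
	kubectl get --raw='/healthz?verbose'
	# Or hit the endpoint directly; unauthenticated requests return 403 until
	# RBAC allows the anonymous user, as seen in the first response above.
	curl -k https://192.168.67.2:8443/healthz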
	I0717 19:17:23.926528  318888 cni.go:84] Creating CNI manager for ""
	I0717 19:17:23.926537  318888 cni.go:149] "docker" driver + "crio" runtime found, recommending kindnet
	I0717 19:17:23.929204  318888 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0717 19:17:19.144073  320967 cli_runner.go:164] Run: docker volume create missing-upgrade-629154 --label name.minikube.sigs.k8s.io=missing-upgrade-629154 --label created_by.minikube.sigs.k8s.io=true
	I0717 19:17:19.160133  320967 oci.go:103] Successfully created a docker volume missing-upgrade-629154
	I0717 19:17:19.160242  320967 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-629154-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-629154 --entrypoint /usr/bin/test -v missing-upgrade-629154:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0717 19:17:22.070001  320967 cli_runner.go:217] Completed: docker run --rm --name missing-upgrade-629154-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-629154 --entrypoint /usr/bin/test -v missing-upgrade-629154:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib: (2.909707016s)
	I0717 19:17:22.070032  320967 oci.go:107] Successfully prepared a docker volume missing-upgrade-629154
	I0717 19:17:22.070049  320967 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	W0717 19:17:22.070172  320967 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0717 19:17:22.070264  320967 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0717 19:17:22.129147  320967 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-629154 --name missing-upgrade-629154 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-629154 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-629154 --network missing-upgrade-629154 --ip 192.168.94.2 --volume missing-upgrade-629154:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0717 19:17:22.460589  320967 cli_runner.go:164] Run: docker container inspect missing-upgrade-629154 --format={{.State.Running}}
	I0717 19:17:22.489885  320967 cli_runner.go:164] Run: docker container inspect missing-upgrade-629154 --format={{.State.Status}}
	I0717 19:17:22.515709  320967 cli_runner.go:164] Run: docker exec missing-upgrade-629154 stat /var/lib/dpkg/alternatives/iptables
	I0717 19:17:22.590040  320967 oci.go:144] the created container "missing-upgrade-629154" has a running status.
	I0717 19:17:22.590075  320967 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/16890-138069/.minikube/machines/missing-upgrade-629154/id_rsa...
	I0717 19:17:22.751433  320967 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/16890-138069/.minikube/machines/missing-upgrade-629154/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0717 19:17:22.773136  320967 cli_runner.go:164] Run: docker container inspect missing-upgrade-629154 --format={{.State.Status}}
	I0717 19:17:22.792719  320967 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0717 19:17:22.792750  320967 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-629154 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0717 19:17:22.865866  320967 cli_runner.go:164] Run: docker container inspect missing-upgrade-629154 --format={{.State.Status}}
	I0717 19:17:22.887452  320967 machine.go:88] provisioning docker machine ...
	I0717 19:17:22.887490  320967 ubuntu.go:169] provisioning hostname "missing-upgrade-629154"
	I0717 19:17:22.887568  320967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-629154
	I0717 19:17:22.905358  320967 main.go:141] libmachine: Using SSH client type: native
	I0717 19:17:22.905977  320967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32979 <nil> <nil>}
	I0717 19:17:22.905994  320967 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-629154 && echo "missing-upgrade-629154" | sudo tee /etc/hostname
	I0717 19:17:22.906764  320967 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45632->127.0.0.1:32979: read: connection reset by peer
	I0717 19:17:24.088536  319694 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 19:17:24.088584  319694 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0717 19:17:23.930742  318888 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 19:17:23.934475  318888 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0717 19:17:23.934493  318888 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 19:17:23.950602  318888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0717 19:17:24.599328  318888 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:17:24.606223  318888 system_pods.go:59] 7 kube-system pods found
	I0717 19:17:24.606260  318888 system_pods.go:61] "coredns-5d78c9869d-7bhk2" [113dbc11-1279-4188-b57f-ef1a7476354e] Running
	I0717 19:17:24.606270  318888 system_pods.go:61] "etcd-pause-795576" [cb60766e-050b-459f-ab27-b4eb96c1cfb1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 19:17:24.606283  318888 system_pods.go:61] "kindnet-blwth" [7367b120-9ad2-48ef-a098-f9427cd70ce7] Running
	I0717 19:17:24.606295  318888 system_pods.go:61] "kube-apiserver-pause-795576" [deacff2a-f4f5-4573-985b-f50aec648951] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 19:17:24.606305  318888 system_pods.go:61] "kube-controller-manager-pause-795576" [7fe105ea-5ec8-4082-8c94-109c5613c844] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 19:17:24.606312  318888 system_pods.go:61] "kube-proxy-vcv28" [543aec10-6af6-4088-941a-d684da877b3f] Running
	I0717 19:17:24.606330  318888 system_pods.go:61] "kube-scheduler-pause-795576" [282169f5-c63d-4d71-9dd5-180ca707ac61] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 19:17:24.606337  318888 system_pods.go:74] duration metric: took 6.98622ms to wait for pod list to return data ...
	I0717 19:17:24.606346  318888 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:17:24.609591  318888 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0717 19:17:24.609618  318888 node_conditions.go:123] node cpu capacity is 8
	I0717 19:17:24.609627  318888 node_conditions.go:105] duration metric: took 3.276797ms to run NodePressure ...
	I0717 19:17:24.609647  318888 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 19:17:24.829854  318888 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0717 19:17:24.834970  318888 kubeadm.go:787] kubelet initialised
	I0717 19:17:24.834992  318888 kubeadm.go:788] duration metric: took 5.114607ms waiting for restarted kubelet to initialise ...
	I0717 19:17:24.835001  318888 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:17:24.840370  318888 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-7bhk2" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:24.845574  318888 pod_ready.go:92] pod "coredns-5d78c9869d-7bhk2" in "kube-system" namespace has status "Ready":"True"
	I0717 19:17:24.845597  318888 pod_ready.go:81] duration metric: took 5.201567ms waiting for pod "coredns-5d78c9869d-7bhk2" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:24.845608  318888 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-795576" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:26.856893  318888 pod_ready.go:102] pod "etcd-pause-795576" in "kube-system" namespace has status "Ready":"False"
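The pod_ready waits above poll each system-critical pod for the Ready condition. A minimal kubectl equivalent, assuming the profile name pause-795576 is also the kubectl context (as minikube normally sets up):

	# Block until etcd for this node reports Ready, or the 4m timeout expires.
	kubectl --context pause-795576 -n kube-system \
	  wait --for=condition=Ready pod/etcd-pause-795576 --timeout=4m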
	I0717 19:17:26.047863  320967 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-629154
	
	I0717 19:17:26.048003  320967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-629154
	I0717 19:17:26.066175  320967 main.go:141] libmachine: Using SSH client type: native
	I0717 19:17:26.066619  320967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32979 <nil> <nil>}
	I0717 19:17:26.066642  320967 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-629154' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-629154/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-629154' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:17:26.192305  320967 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:17:26.192337  320967 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16890-138069/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-138069/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-138069/.minikube}
	I0717 19:17:26.192357  320967 ubuntu.go:177] setting up certificates
	I0717 19:17:26.192366  320967 provision.go:83] configureAuth start
	I0717 19:17:26.192418  320967 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-629154
	I0717 19:17:26.209346  320967 provision.go:138] copyHostCerts
	I0717 19:17:26.209408  320967 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-138069/.minikube/ca.pem, removing ...
	I0717 19:17:26.209416  320967 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-138069/.minikube/ca.pem
	I0717 19:17:26.209481  320967 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-138069/.minikube/ca.pem (1078 bytes)
	I0717 19:17:26.209565  320967 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-138069/.minikube/cert.pem, removing ...
	I0717 19:17:26.209573  320967 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-138069/.minikube/cert.pem
	I0717 19:17:26.209595  320967 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-138069/.minikube/cert.pem (1123 bytes)
	I0717 19:17:26.209653  320967 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-138069/.minikube/key.pem, removing ...
	I0717 19:17:26.209661  320967 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-138069/.minikube/key.pem
	I0717 19:17:26.209682  320967 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-138069/.minikube/key.pem (1675 bytes)
	I0717 19:17:26.209729  320967 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-138069/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-629154 san=[192.168.94.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-629154]
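minikube generates that server certificate in-process from the profile CA. Purely as an illustration of the same idea, and not minikube's actual code path, an openssl sketch that signs a server cert with the SAN list shown in the log line above (file names are assumptions):

	# Hypothetical illustration only: create a key/CSR, then sign it with the
	# profile CA, embedding the IP and DNS SANs from the log.
	openssl req -new -newkey rsa:2048 -nodes \
	  -keyout server-key.pem -out server.csr -subj "/O=jenkins.missing-upgrade-629154"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -out server.pem -days 365 \
	  -extfile <(printf "subjectAltName=IP:192.168.94.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:missing-upgrade-629154")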
	I0717 19:17:26.391286  320967 provision.go:172] copyRemoteCerts
	I0717 19:17:26.391347  320967 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:17:26.391387  320967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-629154
	I0717 19:17:26.409619  320967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32979 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/missing-upgrade-629154/id_rsa Username:docker}
	I0717 19:17:26.501111  320967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 19:17:26.524306  320967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0717 19:17:26.547309  320967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 19:17:26.569596  320967 provision.go:86] duration metric: configureAuth took 377.215595ms
	I0717 19:17:26.569626  320967 ubuntu.go:193] setting minikube options for container-runtime
	I0717 19:17:26.569809  320967 config.go:182] Loaded profile config "missing-upgrade-629154": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0717 19:17:26.569915  320967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-629154
	I0717 19:17:26.587252  320967 main.go:141] libmachine: Using SSH client type: native
	I0717 19:17:26.587695  320967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32979 <nil> <nil>}
	I0717 19:17:26.587716  320967 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:17:27.016114  320967 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:17:27.016149  320967 machine.go:91] provisioned docker machine in 4.128673908s
	I0717 19:17:27.016159  320967 client.go:171] LocalClient.Create took 8.002964436s
	I0717 19:17:27.016178  320967 start.go:167] duration metric: libmachine.API.Create for "missing-upgrade-629154" took 8.00302511s
	I0717 19:17:27.016187  320967 start.go:300] post-start starting for "missing-upgrade-629154" (driver="docker")
	I0717 19:17:27.016200  320967 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:17:27.016260  320967 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:17:27.016297  320967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-629154
	I0717 19:17:27.033706  320967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32979 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/missing-upgrade-629154/id_rsa Username:docker}
	I0717 19:17:27.125336  320967 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:17:27.128735  320967 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 19:17:27.128773  320967 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 19:17:27.128787  320967 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 19:17:27.128796  320967 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0717 19:17:27.128808  320967 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-138069/.minikube/addons for local assets ...
	I0717 19:17:27.128868  320967 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-138069/.minikube/files for local assets ...
	I0717 19:17:27.128976  320967 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/1448222.pem -> 1448222.pem in /etc/ssl/certs
	I0717 19:17:27.129095  320967 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:17:27.137171  320967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/1448222.pem --> /etc/ssl/certs/1448222.pem (1708 bytes)
	I0717 19:17:27.159571  320967 start.go:303] post-start completed in 143.365725ms
	I0717 19:17:27.159943  320967 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-629154
	I0717 19:17:27.177373  320967 profile.go:148] Saving config to /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/missing-upgrade-629154/config.json ...
	I0717 19:17:27.177620  320967 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 19:17:27.177667  320967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-629154
	I0717 19:17:27.194051  320967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32979 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/missing-upgrade-629154/id_rsa Username:docker}
	I0717 19:17:27.281252  320967 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 19:17:27.285719  320967 start.go:128] duration metric: createHost completed in 8.275511714s
	I0717 19:17:27.285823  320967 cli_runner.go:164] Run: docker container inspect missing-upgrade-629154 --format={{.State.Status}}
	W0717 19:17:27.304721  320967 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 19:17:27.304759  320967 machine.go:88] provisioning docker machine ...
	I0717 19:17:27.304795  320967 ubuntu.go:169] provisioning hostname "missing-upgrade-629154"
	I0717 19:17:27.304854  320967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-629154
	I0717 19:17:27.323303  320967 main.go:141] libmachine: Using SSH client type: native
	I0717 19:17:27.323762  320967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32979 <nil> <nil>}
	I0717 19:17:27.323780  320967 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-629154 && echo "missing-upgrade-629154" | sudo tee /etc/hostname
	I0717 19:17:27.463320  320967 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-629154
	
	I0717 19:17:27.463427  320967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-629154
	I0717 19:17:27.480910  320967 main.go:141] libmachine: Using SSH client type: native
	I0717 19:17:27.481322  320967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32979 <nil> <nil>}
	I0717 19:17:27.481340  320967 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-629154' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-629154/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-629154' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 19:17:27.608434  320967 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 19:17:27.608471  320967 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/16890-138069/.minikube CaCertPath:/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16890-138069/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16890-138069/.minikube}
	I0717 19:17:27.608507  320967 ubuntu.go:177] setting up certificates
	I0717 19:17:27.608519  320967 provision.go:83] configureAuth start
	I0717 19:17:27.608583  320967 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-629154
	I0717 19:17:27.626713  320967 provision.go:138] copyHostCerts
	I0717 19:17:27.626805  320967 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-138069/.minikube/ca.pem, removing ...
	I0717 19:17:27.626822  320967 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-138069/.minikube/ca.pem
	I0717 19:17:27.626895  320967 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16890-138069/.minikube/ca.pem (1078 bytes)
	I0717 19:17:27.627011  320967 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-138069/.minikube/cert.pem, removing ...
	I0717 19:17:27.627024  320967 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-138069/.minikube/cert.pem
	I0717 19:17:27.627053  320967 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16890-138069/.minikube/cert.pem (1123 bytes)
	I0717 19:17:27.627124  320967 exec_runner.go:144] found /home/jenkins/minikube-integration/16890-138069/.minikube/key.pem, removing ...
	I0717 19:17:27.627135  320967 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16890-138069/.minikube/key.pem
	I0717 19:17:27.627160  320967 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16890-138069/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16890-138069/.minikube/key.pem (1675 bytes)
	I0717 19:17:27.627236  320967 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16890-138069/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-629154 san=[192.168.94.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-629154]
	I0717 19:17:27.714471  320967 provision.go:172] copyRemoteCerts
	I0717 19:17:27.714534  320967 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 19:17:27.714586  320967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-629154
	I0717 19:17:27.732023  320967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32979 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/missing-upgrade-629154/id_rsa Username:docker}
	I0717 19:17:27.829082  320967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 19:17:27.851344  320967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0717 19:17:27.874597  320967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 19:17:27.897224  320967 provision.go:86] duration metric: configureAuth took 288.686927ms
	I0717 19:17:27.897251  320967 ubuntu.go:193] setting minikube options for container-runtime
	I0717 19:17:27.897418  320967 config.go:182] Loaded profile config "missing-upgrade-629154": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0717 19:17:27.897513  320967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-629154
	I0717 19:17:27.914517  320967 main.go:141] libmachine: Using SSH client type: native
	I0717 19:17:27.914920  320967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80eba0] 0x811c40 <nil>  [] 0s} 127.0.0.1 32979 <nil> <nil>}
	I0717 19:17:27.914937  320967 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0717 19:17:28.171362  320967 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0717 19:17:28.171397  320967 machine.go:91] provisioned docker machine in 866.624028ms
	I0717 19:17:28.171410  320967 start.go:300] post-start starting for "missing-upgrade-629154" (driver="docker")
	I0717 19:17:28.171423  320967 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 19:17:28.171484  320967 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 19:17:28.171529  320967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-629154
	I0717 19:17:28.188632  320967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32979 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/missing-upgrade-629154/id_rsa Username:docker}
	I0717 19:17:28.281181  320967 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 19:17:28.284431  320967 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 19:17:28.284485  320967 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 19:17:28.284497  320967 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 19:17:28.284508  320967 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0717 19:17:28.284520  320967 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-138069/.minikube/addons for local assets ...
	I0717 19:17:28.284586  320967 filesync.go:126] Scanning /home/jenkins/minikube-integration/16890-138069/.minikube/files for local assets ...
	I0717 19:17:28.284668  320967 filesync.go:149] local asset: /home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/1448222.pem -> 1448222.pem in /etc/ssl/certs
	I0717 19:17:28.284769  320967 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 19:17:28.293058  320967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/ssl/certs/1448222.pem --> /etc/ssl/certs/1448222.pem (1708 bytes)
	I0717 19:17:28.315290  320967 start.go:303] post-start completed in 143.863909ms
	I0717 19:17:28.315362  320967 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 19:17:28.315405  320967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-629154
	I0717 19:17:28.332422  320967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32979 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/missing-upgrade-629154/id_rsa Username:docker}
	I0717 19:17:28.420734  320967 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 19:17:28.424810  320967 fix.go:56] fixHost completed within 24.070138706s
	I0717 19:17:28.424837  320967 start.go:83] releasing machines lock for "missing-upgrade-629154", held for 24.070192183s
	I0717 19:17:28.424924  320967 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-629154
	I0717 19:17:28.441058  320967 ssh_runner.go:195] Run: cat /version.json
	I0717 19:17:28.441107  320967 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 19:17:28.441131  320967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-629154
	I0717 19:17:28.441171  320967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-629154
	I0717 19:17:28.458194  320967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32979 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/missing-upgrade-629154/id_rsa Username:docker}
	I0717 19:17:28.459501  320967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32979 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/missing-upgrade-629154/id_rsa Username:docker}
	W0717 19:17:28.636159  320967 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
	I0717 19:17:28.636258  320967 ssh_runner.go:195] Run: systemctl --version
	I0717 19:17:28.640703  320967 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0717 19:17:28.778708  320967 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 19:17:28.783280  320967 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:17:28.801456  320967 cni.go:227] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0717 19:17:28.801540  320967 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 19:17:28.829717  320967 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
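Before handing networking to its own CNI setup, minikube disables any pre-existing loopback, bridge, or podman CNI configs by renaming them with a .mk_disabled suffix, as the find/mv commands above show. A minimal local sketch of that rename pass, assuming direct filesystem access rather than the SSH runner used in the log; disableCNIConfigs is an illustrative name.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableCNIConfigs renames bridge/podman CNI config files in dir to
// <name>.mk_disabled, mirroring the find/mv step in the log above.
func disableCNIConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableCNIConfigs("/etc/cni/net.d")
	fmt.Println(disabled, err)
}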
	I0717 19:17:28.829747  320967 start.go:469] detecting cgroup driver to use...
	I0717 19:17:28.829786  320967 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 19:17:28.829837  320967 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0717 19:17:28.843756  320967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0717 19:17:28.854761  320967 docker.go:196] disabling cri-docker service (if available) ...
	I0717 19:17:28.854811  320967 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0717 19:17:28.867747  320967 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0717 19:17:28.881473  320967 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0717 19:17:28.956358  320967 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0717 19:17:29.033396  320967 docker.go:212] disabling docker service ...
	I0717 19:17:29.033460  320967 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0717 19:17:29.051767  320967 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0717 19:17:29.062851  320967 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0717 19:17:29.146285  320967 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0717 19:17:29.229241  320967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0717 19:17:29.239798  320967 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 19:17:29.255456  320967 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0717 19:17:29.255522  320967 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:17:29.264555  320967 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0717 19:17:29.264627  320967 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:17:29.274104  320967 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:17:29.283049  320967 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0717 19:17:29.291836  320967 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 19:17:29.300285  320967 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 19:17:29.308030  320967 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 19:17:29.315680  320967 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 19:17:29.388239  320967 ssh_runner.go:195] Run: sudo systemctl restart crio
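The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the registry.k8s.io/pause:3.2 pause image and the cgroupfs cgroup manager (with conmon placed in the pod cgroup) before crio is restarted. Below is a sketch of the same rewrite using Go regexps instead of remote sed; the function name and direct local file access are assumptions.

package main

import (
	"os"
	"regexp"
)

// rewriteCrioConf mirrors the sed edits in the log: set pause_image and
// cgroup_manager, drop any existing conmon_cgroup line, and re-add
// conmon_cgroup = "pod" right after cgroup_manager.
func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "`+pauseImage+`"`))
	out = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n?`).ReplaceAll(out, nil)
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "`+cgroupManager+`"`+"\nconmon_cgroup = \"pod\""))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	// Pause image and cgroup driver values taken from the log above.
	_ = rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.2", "cgroupfs")
}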
	I0717 19:17:29.489929  320967 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0717 19:17:29.489996  320967 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0717 19:17:29.493556  320967 start.go:537] Will wait 60s for crictl version
	I0717 19:17:29.493622  320967 ssh_runner.go:195] Run: which crictl
	I0717 19:17:29.497005  320967 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 19:17:29.531465  320967 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0717 19:17:29.531554  320967 ssh_runner.go:195] Run: crio --version
	I0717 19:17:29.566836  320967 ssh_runner.go:195] Run: crio --version
	I0717 19:17:29.603453  320967 out.go:177] * Preparing Kubernetes v1.18.0 on CRI-O 1.24.6 ...
	I0717 19:17:29.605137  320967 cli_runner.go:164] Run: docker network inspect missing-upgrade-629154 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 19:17:29.621320  320967 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0717 19:17:29.625049  320967 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
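The grep/cp pair above ensures /etc/hosts inside the node carries a host.minikube.internal record pointing at the Docker network gateway (192.168.94.1 in this run). A small sketch of that update, assuming local file access; setHostRecord is an illustrative name.

package main

import (
	"os"
	"strings"
)

// setHostRecord drops any stale "host.minikube.internal" line from the hosts
// file and appends one pointing at the given IP, like the grep -v / cp
// pipeline in the log above.
func setHostRecord(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // remove the stale record
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	_ = setHostRecord("/etc/hosts", "192.168.94.1", "host.minikube.internal")
}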
	I0717 19:17:29.638463  320967 out.go:177]   - kubeadm.pod-network-cidr=10.244.0.0/16
	I0717 19:17:29.089493  319694 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0717 19:17:29.089536  319694 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0717 19:17:28.856999  318888 pod_ready.go:102] pod "etcd-pause-795576" in "kube-system" namespace has status "Ready":"False"
	I0717 19:17:31.355930  318888 pod_ready.go:92] pod "etcd-pause-795576" in "kube-system" namespace has status "Ready":"True"
	I0717 19:17:31.355952  318888 pod_ready.go:81] duration metric: took 6.510338235s waiting for pod "etcd-pause-795576" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:31.355965  318888 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-795576" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:29.640069  320967 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I0717 19:17:29.640135  320967 ssh_runner.go:195] Run: sudo crictl images --output json
	I0717 19:17:29.677081  320967 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.0". assuming images are not preloaded.
	I0717 19:17:29.677103  320967 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.0 registry.k8s.io/kube-controller-manager:v1.18.0 registry.k8s.io/kube-scheduler:v1.18.0 registry.k8s.io/kube-proxy:v1.18.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 19:17:29.677196  320967 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:17:29.677219  320967 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.0
	I0717 19:17:29.677228  320967 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.0
	I0717 19:17:29.677234  320967 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0717 19:17:29.677249  320967 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0717 19:17:29.677198  320967 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.0
	I0717 19:17:29.677335  320967 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0717 19:17:29.677198  320967 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.0
	I0717 19:17:29.678391  320967 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.0
	I0717 19:17:29.678402  320967 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0717 19:17:29.678453  320967 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0717 19:17:29.678466  320967 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.0
	I0717 19:17:29.678397  320967 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.0
	I0717 19:17:29.678500  320967 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:17:29.678399  320967 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.0
	I0717 19:17:29.678706  320967 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0717 19:17:29.824846  320967 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.0
	I0717 19:17:29.847646  320967 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0717 19:17:29.851422  320967 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0717 19:17:29.853167  320967 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.0
	I0717 19:17:29.853500  320967 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.0
	I0717 19:17:29.855295  320967 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.0
	I0717 19:17:29.864298  320967 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.0" does not exist at hash "74060cea7f70476f300d9f04fe2c3b3a2e84589e0579382a8df8c82161c3735c" in container runtime
	I0717 19:17:29.864374  320967 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.0
	I0717 19:17:29.864425  320967 ssh_runner.go:195] Run: which crictl
	I0717 19:17:29.868223  320967 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0717 19:17:29.957661  320967 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:17:29.973824  320967 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0717 19:17:29.973876  320967 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0717 19:17:29.973926  320967 ssh_runner.go:195] Run: which crictl
	I0717 19:17:29.982131  320967 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0717 19:17:29.982180  320967 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0717 19:17:29.982222  320967 ssh_runner.go:195] Run: which crictl
	I0717 19:17:30.020947  320967 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.0" needs transfer: "registry.k8s.io/kube-proxy:v1.18.0" does not exist at hash "43940c34f24f39bc9a00b4f9dbcab51a3b28952a7c392c119b877fcb48fe65a3" in container runtime
	I0717 19:17:30.020984  320967 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.0
	I0717 19:17:30.021031  320967 ssh_runner.go:195] Run: which crictl
	I0717 19:17:30.021036  320967 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.0" does not exist at hash "d3e55153f52fb62421dae9ad1a8690a3fd1b30f1b808e50a69a8e7ed5565e72e" in container runtime
	I0717 19:17:30.021080  320967 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.0
	I0717 19:17:30.021121  320967 ssh_runner.go:195] Run: which crictl
	I0717 19:17:30.021137  320967 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.0" does not exist at hash "a31f78c7c8ce146a60cc178c528dd08ca89320f2883e7eb804d7f7b062ed6466" in container runtime
	I0717 19:17:30.021169  320967 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.0
	I0717 19:17:30.021200  320967 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.0
	I0717 19:17:30.021235  320967 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0717 19:17:30.021206  320967 ssh_runner.go:195] Run: which crictl
	I0717 19:17:30.021262  320967 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0717 19:17:30.021294  320967 ssh_runner.go:195] Run: which crictl
	I0717 19:17:30.021311  320967 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0717 19:17:30.021348  320967 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:17:30.021386  320967 ssh_runner.go:195] Run: which crictl
	I0717 19:17:30.024950  320967 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0717 19:17:30.024983  320967 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.0
	I0717 19:17:30.025034  320967 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0717 19:17:30.063920  320967 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.0
	I0717 19:17:30.063920  320967 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.0
	I0717 19:17:30.173386  320967 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0
	I0717 19:17:30.173477  320967 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0717 19:17:30.173555  320967 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.18.0
	I0717 19:17:30.173614  320967 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 19:17:30.175927  320967 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0
	I0717 19:17:30.176051  320967 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.18.0
	I0717 19:17:30.184620  320967 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0717 19:17:30.184692  320967 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0
	I0717 19:17:30.184748  320967 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_1.6.7
	I0717 19:17:30.184777  320967 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.18.0
	I0717 19:17:30.184620  320967 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0717 19:17:30.184844  320967 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.4.3-0
	I0717 19:17:30.186917  320967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 --> /var/lib/minikube/images/kube-apiserver_v1.18.0 (51090432 bytes)
	I0717 19:17:30.187020  320967 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0
	I0717 19:17:30.187127  320967 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.18.0
	I0717 19:17:30.300082  320967 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0717 19:17:30.300102  320967 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0717 19:17:30.300198  320967 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0717 19:17:30.300204  320967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 --> /var/lib/minikube/images/kube-proxy_v1.18.0 (48857088 bytes)
	I0717 19:17:30.300269  320967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 --> /var/lib/minikube/images/coredns_1.6.7 (13600256 bytes)
	I0717 19:17:30.300287  320967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 --> /var/lib/minikube/images/kube-controller-manager_v1.18.0 (49124864 bytes)
	I0717 19:17:30.300198  320967 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.2
	I0717 19:17:30.300356  320967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 --> /var/lib/minikube/images/etcd_3.4.3-0 (100950016 bytes)
	I0717 19:17:30.300408  320967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 --> /var/lib/minikube/images/kube-scheduler_v1.18.0 (34077696 bytes)
	I0717 19:17:30.396695  320967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 --> /var/lib/minikube/images/pause_3.2 (301056 bytes)
	I0717 19:17:30.396727  320967 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0717 19:17:30.396762  320967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0717 19:17:30.487685  320967 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.2
	I0717 19:17:30.487888  320967 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.2
	I0717 19:17:30.681924  320967 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 from cache
	I0717 19:17:30.681962  320967 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0717 19:17:30.682013  320967 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0717 19:17:31.467239  320967 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0717 19:17:31.467283  320967 crio.go:257] Loading image: /var/lib/minikube/images/coredns_1.6.7
	I0717 19:17:31.467328  320967 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_1.6.7
	I0717 19:17:31.804792  320967 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 from cache
	I0717 19:17:31.804838  320967 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.18.0
	I0717 19:17:31.804896  320967 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.18.0
	I0717 19:17:32.946234  320967 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.18.0: (1.141304861s)
	I0717 19:17:32.946265  320967 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 from cache
	I0717 19:17:32.946288  320967 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.18.0
	I0717 19:17:32.946327  320967 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.18.0
	I0717 19:17:33.368050  318888 pod_ready.go:102] pod "kube-apiserver-pause-795576" in "kube-system" namespace has status "Ready":"False"
	I0717 19:17:34.867118  318888 pod_ready.go:92] pod "kube-apiserver-pause-795576" in "kube-system" namespace has status "Ready":"True"
	I0717 19:17:34.867141  318888 pod_ready.go:81] duration metric: took 3.511170042s waiting for pod "kube-apiserver-pause-795576" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:34.867154  318888 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-795576" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:34.872200  318888 pod_ready.go:92] pod "kube-controller-manager-pause-795576" in "kube-system" namespace has status "Ready":"True"
	I0717 19:17:34.872222  318888 pod_ready.go:81] duration metric: took 5.061874ms waiting for pod "kube-controller-manager-pause-795576" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:34.872234  318888 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vcv28" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:34.876974  318888 pod_ready.go:92] pod "kube-proxy-vcv28" in "kube-system" namespace has status "Ready":"True"
	I0717 19:17:34.876994  318888 pod_ready.go:81] duration metric: took 4.75416ms waiting for pod "kube-proxy-vcv28" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:34.877002  318888 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-795576" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:34.882008  318888 pod_ready.go:92] pod "kube-scheduler-pause-795576" in "kube-system" namespace has status "Ready":"True"
	I0717 19:17:34.882025  318888 pod_ready.go:81] duration metric: took 5.017488ms waiting for pod "kube-scheduler-pause-795576" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:34.882031  318888 pod_ready.go:38] duration metric: took 10.047022086s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:17:34.882048  318888 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 19:17:34.889459  318888 ops.go:34] apiserver oom_adj: -16
	I0717 19:17:34.889481  318888 kubeadm.go:640] restartCluster took 27.829897508s
	I0717 19:17:34.889489  318888 kubeadm.go:406] StartCluster complete in 27.912159818s
	I0717 19:17:34.889507  318888 settings.go:142] acquiring lock: {Name:mk9765434b8f4871dd605367f6caa71617d51b6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:17:34.889566  318888 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16890-138069/kubeconfig
	I0717 19:17:34.890985  318888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16890-138069/kubeconfig: {Name:mkc53c034e0e90a78d013670a58d5882070a3e3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 19:17:34.891218  318888 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 19:17:34.891367  318888 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0717 19:17:34.893621  318888 out.go:177] * Enabled addons: 
	I0717 19:17:34.891570  318888 config.go:182] Loaded profile config "pause-795576": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:17:34.892386  318888 kapi.go:59] client config for pause-795576: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16890-138069/.minikube/profiles/pause-795576/client.crt", KeyFile:"/home/jenkins/minikube-integration/16890-138069/.minikube/profiles/pause-795576/client.key", CAFile:"/home/jenkins/minikube-integration/16890-138069/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19c2d00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 19:17:34.895686  318888 addons.go:502] enable addons completed in 4.319254ms: enabled=[]
	I0717 19:17:34.900065  318888 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-795576" context rescaled to 1 replicas
	I0717 19:17:34.900105  318888 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0717 19:17:34.901715  318888 out.go:177] * Verifying Kubernetes components...
	I0717 19:17:33.801110  319694 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": read tcp 192.168.85.1:50572->192.168.85.2:8443: read: connection reset by peer
	I0717 19:17:33.801167  319694 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0717 19:17:33.801586  319694 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0717 19:17:34.088017  319694 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0717 19:17:34.088418  319694 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0717 19:17:34.588032  319694 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0717 19:17:34.588510  319694 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I0717 19:17:35.087132  319694 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0717 19:17:34.903227  318888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:17:34.975028  318888 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0717 19:17:34.975040  318888 node_ready.go:35] waiting up to 6m0s for node "pause-795576" to be "Ready" ...
	I0717 19:17:34.977527  318888 node_ready.go:49] node "pause-795576" has status "Ready":"True"
	I0717 19:17:34.977547  318888 node_ready.go:38] duration metric: took 2.489317ms waiting for node "pause-795576" to be "Ready" ...
	I0717 19:17:34.977557  318888 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 19:17:34.982915  318888 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-7bhk2" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:35.264001  318888 pod_ready.go:92] pod "coredns-5d78c9869d-7bhk2" in "kube-system" namespace has status "Ready":"True"
	I0717 19:17:35.264029  318888 pod_ready.go:81] duration metric: took 281.084061ms waiting for pod "coredns-5d78c9869d-7bhk2" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:35.264039  318888 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-795576" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:35.664667  318888 pod_ready.go:92] pod "etcd-pause-795576" in "kube-system" namespace has status "Ready":"True"
	I0717 19:17:35.664695  318888 pod_ready.go:81] duration metric: took 400.647826ms waiting for pod "etcd-pause-795576" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:35.664711  318888 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-795576" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:36.064629  318888 pod_ready.go:92] pod "kube-apiserver-pause-795576" in "kube-system" namespace has status "Ready":"True"
	I0717 19:17:36.064655  318888 pod_ready.go:81] duration metric: took 399.935907ms waiting for pod "kube-apiserver-pause-795576" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:36.064666  318888 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-795576" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:36.464603  318888 pod_ready.go:92] pod "kube-controller-manager-pause-795576" in "kube-system" namespace has status "Ready":"True"
	I0717 19:17:36.464628  318888 pod_ready.go:81] duration metric: took 399.955789ms waiting for pod "kube-controller-manager-pause-795576" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:36.464638  318888 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vcv28" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:36.864714  318888 pod_ready.go:92] pod "kube-proxy-vcv28" in "kube-system" namespace has status "Ready":"True"
	I0717 19:17:36.864736  318888 pod_ready.go:81] duration metric: took 400.092782ms waiting for pod "kube-proxy-vcv28" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:36.864745  318888 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-795576" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:37.264940  318888 pod_ready.go:92] pod "kube-scheduler-pause-795576" in "kube-system" namespace has status "Ready":"True"
	I0717 19:17:37.264967  318888 pod_ready.go:81] duration metric: took 400.214774ms waiting for pod "kube-scheduler-pause-795576" in "kube-system" namespace to be "Ready" ...
	I0717 19:17:37.264981  318888 pod_ready.go:38] duration metric: took 2.287410265s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
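The pod_ready.go lines above poll each system-critical pod until its Ready condition reports True. Below is a sketch of one such wait using client-go; the helper names, the 2-second poll interval, and reading the kubeconfig at /home/jenkins/minikube-integration/16890-138069/kubeconfig are assumptions for illustration, not minikube's exact code path.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's Ready condition is True.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForPodReady polls the named pod until it is Ready or the timeout expires.
func waitForPodReady(kubeconfig, ns, name string, timeout time.Duration) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && podIsReady(pod) {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

func main() {
	err := waitForPodReady("/home/jenkins/minikube-integration/16890-138069/kubeconfig",
		"kube-system", "etcd-pause-795576", 4*time.Minute)
	fmt.Println(err)
}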
	I0717 19:17:37.265001  318888 api_server.go:52] waiting for apiserver process to appear ...
	I0717 19:17:37.265055  318888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:17:37.276679  318888 api_server.go:72] duration metric: took 2.376534107s to wait for apiserver process to appear ...
	I0717 19:17:37.276709  318888 api_server.go:88] waiting for apiserver healthz status ...
	I0717 19:17:37.276726  318888 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0717 19:17:37.281249  318888 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0717 19:17:37.282295  318888 api_server.go:141] control plane version: v1.27.3
	I0717 19:17:37.282319  318888 api_server.go:131] duration metric: took 5.603456ms to wait for apiserver health ...
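The api_server.go lines (from both the pause-795576 goroutine and the other profile running in parallel) show the readiness pattern: GET /healthz on the apiserver, treating connection refused, resets, and client timeouts as "stopped" and retrying until the body is "ok". A hedged sketch of that poll loop follows; skipping TLS verification and the 500ms retry interval are simplifications, since minikube actually presents the cluster CA and client certificates.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// 200 "ok" or the timeout expires, retrying through transient errors.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s never reported healthy", url)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.67.2:8443/healthz", time.Minute))
}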
	I0717 19:17:37.282329  318888 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 19:17:37.467541  318888 system_pods.go:59] 7 kube-system pods found
	I0717 19:17:37.467573  318888 system_pods.go:61] "coredns-5d78c9869d-7bhk2" [113dbc11-1279-4188-b57f-ef1a7476354e] Running
	I0717 19:17:37.467581  318888 system_pods.go:61] "etcd-pause-795576" [cb60766e-050b-459f-ab27-b4eb96c1cfb1] Running
	I0717 19:17:37.467586  318888 system_pods.go:61] "kindnet-blwth" [7367b120-9ad2-48ef-a098-f9427cd70ce7] Running
	I0717 19:17:37.467592  318888 system_pods.go:61] "kube-apiserver-pause-795576" [deacff2a-f4f5-4573-985b-f50aec648951] Running
	I0717 19:17:37.467597  318888 system_pods.go:61] "kube-controller-manager-pause-795576" [7fe105ea-5ec8-4082-8c94-109c5613c844] Running
	I0717 19:17:37.467603  318888 system_pods.go:61] "kube-proxy-vcv28" [543aec10-6af6-4088-941a-d684da877b3f] Running
	I0717 19:17:37.467608  318888 system_pods.go:61] "kube-scheduler-pause-795576" [282169f5-c63d-4d71-9dd5-180ca707ac61] Running
	I0717 19:17:37.467618  318888 system_pods.go:74] duration metric: took 185.280635ms to wait for pod list to return data ...
	I0717 19:17:37.467628  318888 default_sa.go:34] waiting for default service account to be created ...
	I0717 19:17:37.664184  318888 default_sa.go:45] found service account: "default"
	I0717 19:17:37.664211  318888 default_sa.go:55] duration metric: took 196.57685ms for default service account to be created ...
	I0717 19:17:37.664219  318888 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 19:17:37.867944  318888 system_pods.go:86] 7 kube-system pods found
	I0717 19:17:37.868007  318888 system_pods.go:89] "coredns-5d78c9869d-7bhk2" [113dbc11-1279-4188-b57f-ef1a7476354e] Running
	I0717 19:17:37.868020  318888 system_pods.go:89] "etcd-pause-795576" [cb60766e-050b-459f-ab27-b4eb96c1cfb1] Running
	I0717 19:17:37.868025  318888 system_pods.go:89] "kindnet-blwth" [7367b120-9ad2-48ef-a098-f9427cd70ce7] Running
	I0717 19:17:37.868032  318888 system_pods.go:89] "kube-apiserver-pause-795576" [deacff2a-f4f5-4573-985b-f50aec648951] Running
	I0717 19:17:37.868036  318888 system_pods.go:89] "kube-controller-manager-pause-795576" [7fe105ea-5ec8-4082-8c94-109c5613c844] Running
	I0717 19:17:37.868041  318888 system_pods.go:89] "kube-proxy-vcv28" [543aec10-6af6-4088-941a-d684da877b3f] Running
	I0717 19:17:37.868045  318888 system_pods.go:89] "kube-scheduler-pause-795576" [282169f5-c63d-4d71-9dd5-180ca707ac61] Running
	I0717 19:17:37.868051  318888 system_pods.go:126] duration metric: took 203.827832ms to wait for k8s-apps to be running ...
	I0717 19:17:37.868058  318888 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 19:17:37.868104  318888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:17:37.882536  318888 system_svc.go:56] duration metric: took 14.46342ms WaitForService to wait for kubelet.
	I0717 19:17:37.882566  318888 kubeadm.go:581] duration metric: took 2.982428447s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 19:17:37.882591  318888 node_conditions.go:102] verifying NodePressure condition ...
	I0717 19:17:38.064900  318888 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0717 19:17:38.064924  318888 node_conditions.go:123] node cpu capacity is 8
	I0717 19:17:38.064935  318888 node_conditions.go:105] duration metric: took 182.337085ms to run NodePressure ...
	I0717 19:17:38.064945  318888 start.go:228] waiting for startup goroutines ...
	I0717 19:17:38.064951  318888 start.go:233] waiting for cluster config update ...
	I0717 19:17:38.064958  318888 start.go:242] writing updated cluster config ...
	I0717 19:17:38.065224  318888 ssh_runner.go:195] Run: rm -f paused
	I0717 19:17:38.120289  318888 start.go:578] kubectl: 1.27.3, cluster: 1.27.3 (minor skew: 0)
	I0717 19:17:38.122981  318888 out.go:177] * Done! kubectl is now configured to use "pause-795576" cluster and "default" namespace by default
	I0717 19:17:34.989721  320967 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.18.0: (2.043367502s)
	I0717 19:17:34.989750  320967 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 from cache
	I0717 19:17:34.989777  320967 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.18.0
	I0717 19:17:34.989833  320967 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.18.0
	I0717 19:17:35.933692  320967 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 from cache
	I0717 19:17:35.933740  320967 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.18.0
	I0717 19:17:35.933798  320967 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.18.0
	I0717 19:17:38.076474  320967 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.18.0: (2.142640687s)
	I0717 19:17:38.076506  320967 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16890-138069/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 from cache
	I0717 19:17:38.076542  320967 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.4.3-0
	I0717 19:17:38.076605  320967 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.4.3-0
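The cache_images/crio.go lines above follow one pattern per image: `podman image inspect` shows the expected hash is absent ("needs transfer"), the stale tag is removed with `crictl rmi`, the cached tarball is scp'ed to /var/lib/minikube/images, and the image is loaded with `sudo podman load -i`. A sketch of that final load loop, assuming podman is invoked on the local host rather than through minikube's ssh runner; loadCachedImages is an illustrative name.

package main

import (
	"fmt"
	"os/exec"
)

// loadCachedImages loads each tarball previously copied to
// /var/lib/minikube/images into the container runtime's storage via
// `sudo podman load -i`, as the "Loading image" lines show.
func loadCachedImages(tarballs []string) error {
	for _, tar := range tarballs {
		out, err := exec.Command("sudo", "podman", "load", "-i", tar).CombinedOutput()
		if err != nil {
			return fmt.Errorf("load %s: %v: %s", tar, err, out)
		}
		fmt.Println("Transferred and loaded", tar)
	}
	return nil
}

func main() {
	_ = loadCachedImages([]string{
		"/var/lib/minikube/images/pause_3.2",
		"/var/lib/minikube/images/storage-provisioner_v5",
		"/var/lib/minikube/images/etcd_3.4.3-0",
	})
}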
	
	* 
	* ==> CRI-O <==
	* Jul 17 19:17:23 pause-795576 crio[2869]: time="2023-07-17 19:17:23.189965051Z" level=warning msg="Allowed annotations are specified for workload []"
	Jul 17 19:17:23 pause-795576 crio[2869]: time="2023-07-17 19:17:23.190034948Z" level=warning msg="Allowed annotations are specified for workload []"
	Jul 17 19:17:23 pause-795576 crio[2869]: time="2023-07-17 19:17:23.219870677Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/c8b2e7919c165578172869b71ee2fb4ee5ee2cb3be7847b03e13b3bd86c4f451/merged/etc/passwd: no such file or directory"
	Jul 17 19:17:23 pause-795576 crio[2869]: time="2023-07-17 19:17:23.219927612Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/c8b2e7919c165578172869b71ee2fb4ee5ee2cb3be7847b03e13b3bd86c4f451/merged/etc/group: no such file or directory"
	Jul 17 19:17:23 pause-795576 crio[2869]: time="2023-07-17 19:17:23.293482311Z" level=info msg="Created container c7d9925d4d034c86f811fcaa0fc3e82d9e6c6d2aa3586c572cb69b949b380aae: kube-system/kube-proxy-vcv28/kube-proxy" id=0c7c45e8-726c-4529-942f-15fb262b27eb name=/runtime.v1.RuntimeService/CreateContainer
	Jul 17 19:17:23 pause-795576 crio[2869]: time="2023-07-17 19:17:23.293728547Z" level=info msg="Created container 50720c457cc27be585b0bcec78fc0350e552eacbc5e5d3985113f7cdfffb3ec1: kube-system/coredns-5d78c9869d-7bhk2/coredns" id=e687d198-1472-4548-b4c3-03717ded0a5d name=/runtime.v1.RuntimeService/CreateContainer
	Jul 17 19:17:23 pause-795576 crio[2869]: time="2023-07-17 19:17:23.294247896Z" level=info msg="Starting container: 50720c457cc27be585b0bcec78fc0350e552eacbc5e5d3985113f7cdfffb3ec1" id=d133d45a-4093-424c-b314-2453e869b54c name=/runtime.v1.RuntimeService/StartContainer
	Jul 17 19:17:23 pause-795576 crio[2869]: time="2023-07-17 19:17:23.294430824Z" level=info msg="Starting container: c7d9925d4d034c86f811fcaa0fc3e82d9e6c6d2aa3586c572cb69b949b380aae" id=ce4a81de-6905-4e59-94c4-f8d7989578e5 name=/runtime.v1.RuntimeService/StartContainer
	Jul 17 19:17:23 pause-795576 crio[2869]: time="2023-07-17 19:17:23.294771190Z" level=info msg="Created container a5b342c3188d4dada0219660ad4c433081d03f89457a114d9d6e0e04ee02126e: kube-system/kindnet-blwth/kindnet-cni" id=f0a1b2f5-44e0-4d5e-be59-a97f99214b0f name=/runtime.v1.RuntimeService/CreateContainer
	Jul 17 19:17:23 pause-795576 crio[2869]: time="2023-07-17 19:17:23.295189765Z" level=info msg="Starting container: a5b342c3188d4dada0219660ad4c433081d03f89457a114d9d6e0e04ee02126e" id=1ed90718-a316-4779-ba8a-9c9e9f40121a name=/runtime.v1.RuntimeService/StartContainer
	Jul 17 19:17:23 pause-795576 crio[2869]: time="2023-07-17 19:17:23.304472948Z" level=info msg="Started container" PID=3717 containerID=50720c457cc27be585b0bcec78fc0350e552eacbc5e5d3985113f7cdfffb3ec1 description=kube-system/coredns-5d78c9869d-7bhk2/coredns id=d133d45a-4093-424c-b314-2453e869b54c name=/runtime.v1.RuntimeService/StartContainer sandboxID=d649adf698c9dafde02b8a12fb695beb81795107e7d027d64cadfd235bb2ac80
	Jul 17 19:17:23 pause-795576 crio[2869]: time="2023-07-17 19:17:23.306525558Z" level=info msg="Started container" PID=3720 containerID=a5b342c3188d4dada0219660ad4c433081d03f89457a114d9d6e0e04ee02126e description=kube-system/kindnet-blwth/kindnet-cni id=1ed90718-a316-4779-ba8a-9c9e9f40121a name=/runtime.v1.RuntimeService/StartContainer sandboxID=52b5cc4aad2ad9be691effa49714cc8f6b39045961a40662dd74c5acc9780241
	Jul 17 19:17:23 pause-795576 crio[2869]: time="2023-07-17 19:17:23.307693336Z" level=info msg="Started container" PID=3721 containerID=c7d9925d4d034c86f811fcaa0fc3e82d9e6c6d2aa3586c572cb69b949b380aae description=kube-system/kube-proxy-vcv28/kube-proxy id=ce4a81de-6905-4e59-94c4-f8d7989578e5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=be9d0f26dd7c3ab191a5abf36da714632cbd0f3cda9ce14b052bad43e9c67620
	Jul 17 19:17:23 pause-795576 crio[2869]: time="2023-07-17 19:17:23.766821565Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Jul 17 19:17:23 pause-795576 crio[2869]: time="2023-07-17 19:17:23.771178384Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jul 17 19:17:23 pause-795576 crio[2869]: time="2023-07-17 19:17:23.771213859Z" level=info msg="Updated default CNI network name to kindnet"
	Jul 17 19:17:23 pause-795576 crio[2869]: time="2023-07-17 19:17:23.771232766Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Jul 17 19:17:23 pause-795576 crio[2869]: time="2023-07-17 19:17:23.774766709Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jul 17 19:17:23 pause-795576 crio[2869]: time="2023-07-17 19:17:23.774795534Z" level=info msg="Updated default CNI network name to kindnet"
	Jul 17 19:17:23 pause-795576 crio[2869]: time="2023-07-17 19:17:23.774813040Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Jul 17 19:17:23 pause-795576 crio[2869]: time="2023-07-17 19:17:23.778219803Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jul 17 19:17:23 pause-795576 crio[2869]: time="2023-07-17 19:17:23.778248616Z" level=info msg="Updated default CNI network name to kindnet"
	Jul 17 19:17:23 pause-795576 crio[2869]: time="2023-07-17 19:17:23.778260062Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Jul 17 19:17:23 pause-795576 crio[2869]: time="2023-07-17 19:17:23.781582554Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jul 17 19:17:23 pause-795576 crio[2869]: time="2023-07-17 19:17:23.781609794Z" level=info msg="Updated default CNI network name to kindnet"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	c7d9925d4d034       5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c   18 seconds ago       Running             kube-proxy                1                   be9d0f26dd7c3       kube-proxy-vcv28
	50720c457cc27       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   18 seconds ago       Running             coredns                   1                   d649adf698c9d       coredns-5d78c9869d-7bhk2
	a5b342c3188d4       b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da   18 seconds ago       Running             kindnet-cni               1                   52b5cc4aad2ad       kindnet-blwth
	03e6c6fd4ceca       7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f   21 seconds ago       Running             kube-controller-manager   2                   20ae7a52e8589       kube-controller-manager-pause-795576
	cedfc31e11d52       08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a   21 seconds ago       Running             kube-apiserver            2                   a59836c7236b4       kube-apiserver-pause-795576
	f2971fae983c3       41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a   21 seconds ago       Running             kube-scheduler            3                   8c0aa2dd28d39       kube-scheduler-pause-795576
	b13a19d103774       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681   22 seconds ago       Running             etcd                      2                   977426b5ad0d4       etcd-pause-795576
	ab7184693b853       41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a   24 seconds ago       Exited              kube-scheduler            2                   8c0aa2dd28d39       kube-scheduler-pause-795576
	3c4ecc96bf0b9       08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a   36 seconds ago       Exited              kube-apiserver            1                   a59836c7236b4       kube-apiserver-pause-795576
	d061dfba20f3f       7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f   36 seconds ago       Exited              kube-controller-manager   1                   20ae7a52e8589       kube-controller-manager-pause-795576
	f6dec920e96cf       86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681   36 seconds ago       Exited              etcd                      1                   977426b5ad0d4       etcd-pause-795576
	e10be0e9af17f       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   46 seconds ago       Exited              coredns                   0                   d649adf698c9d       coredns-5d78c9869d-7bhk2
	6c2d784dbdd18       5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c   About a minute ago   Exited              kube-proxy                0                   be9d0f26dd7c3       kube-proxy-vcv28
	249f2d6748858       b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da   About a minute ago   Exited              kindnet-cni               0                   52b5cc4aad2ad       kindnet-blwth
	
	* 
	* ==> coredns [50720c457cc27be585b0bcec78fc0350e552eacbc5e5d3985113f7cdfffb3ec1] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = bfa258e3dfcd8004ab6c7d60772766a595ee209e49c62e6ae56bd911a145318b327e0c73bbccac30667047dafea6a8c1149027cea85d58a2246677e8ec1caab2
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50524 - 8696 "HINFO IN 4174031737280363131.579355201474769177. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.02890859s
	
	* 
	* ==> coredns [e10be0e9af17f8987e24e478a481f2c0799874fb1457f4c96c792fa327c1e1fe] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = bfa258e3dfcd8004ab6c7d60772766a595ee209e49c62e6ae56bd911a145318b327e0c73bbccac30667047dafea6a8c1149027cea85d58a2246677e8ec1caab2
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:51811 - 41665 "HINFO IN 4362517523526051086.6698439147695089153. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010873973s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-795576
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-795576
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=46c8e7c4243e42e98d29628785c0523fbabbd9b5
	                    minikube.k8s.io/name=pause-795576
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_17T19_16_11_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 19:16:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-795576
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 19:17:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 19:17:22 +0000   Mon, 17 Jul 2023 19:16:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 19:17:22 +0000   Mon, 17 Jul 2023 19:16:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 19:17:22 +0000   Mon, 17 Jul 2023 19:16:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 19:17:22 +0000   Mon, 17 Jul 2023 19:16:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    pause-795576
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859436Ki
	  pods:               110
	System Info:
	  Machine ID:                 18132160ceaa4c97a29b2e91cfb68c63
	  System UUID:                6a23a9c6-456f-460a-acc3-5ceeb9d277a9
	  Boot ID:                    72066744-0b12-457f-a61f-5086cdf4a210
	  Kernel Version:             5.15.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5d78c9869d-7bhk2                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     78s
	  kube-system                 etcd-pause-795576                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         91s
	  kube-system                 kindnet-blwth                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      78s
	  kube-system                 kube-apiserver-pause-795576             250m (3%)     0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-controller-manager-pause-795576    200m (2%)     0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-proxy-vcv28                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-scheduler-pause-795576             100m (1%)     0 (0%)      0 (0%)           0 (0%)         91s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 77s                  kube-proxy       
	  Normal  Starting                 18s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  101s (x8 over 101s)  kubelet          Node pause-795576 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    101s (x8 over 101s)  kubelet          Node pause-795576 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     101s (x8 over 101s)  kubelet          Node pause-795576 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     92s                  kubelet          Node pause-795576 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  92s                  kubelet          Node pause-795576 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    92s                  kubelet          Node pause-795576 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 92s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           79s                  node-controller  Node pause-795576 event: Registered Node pause-795576 in Controller
	  Normal  NodeReady                47s                  kubelet          Node pause-795576 status is now: NodeReady
	  Normal  Starting                 23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 23s)    kubelet          Node pause-795576 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 23s)    kubelet          Node pause-795576 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x8 over 23s)    kubelet          Node pause-795576 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7s                   node-controller  Node pause-795576 event: Registered Node pause-795576 in Controller
	
	* 
	* ==> dmesg <==
	* [  +4.255707] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-743d16d82889
	[  +0.000007] ll header: 00000000: 02 42 b6 c0 17 7b 02 42 c0 a8 3a 02 08 00
	[  +8.191422] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-743d16d82889
	[  +0.000024] ll header: 00000000: 02 42 b6 c0 17 7b 02 42 c0 a8 3a 02 08 00
	[Jul17 19:08] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-743d16d82889
	[  +0.000009] ll header: 00000000: 02 42 b6 c0 17 7b 02 42 c0 a8 3a 02 08 00
	[  +1.009828] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-743d16d82889
	[  +0.000006] ll header: 00000000: 02 42 b6 c0 17 7b 02 42 c0 a8 3a 02 08 00
	[  +2.015844] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-743d16d82889
	[  +0.000006] ll header: 00000000: 02 42 b6 c0 17 7b 02 42 c0 a8 3a 02 08 00
	[  +4.219847] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-743d16d82889
	[  +0.000006] ll header: 00000000: 02 42 b6 c0 17 7b 02 42 c0 a8 3a 02 08 00
	[  +8.195274] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-743d16d82889
	[  +0.000007] ll header: 00000000: 02 42 b6 c0 17 7b 02 42 c0 a8 3a 02 08 00
	[Jul17 19:11] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-f7333153fb0a
	[  +0.000009] ll header: 00000000: 02 42 f3 7a f9 00 02 42 c0 a8 43 02 08 00
	[  +1.022155] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-f7333153fb0a
	[  +0.000006] ll header: 00000000: 02 42 f3 7a f9 00 02 42 c0 a8 43 02 08 00
	[  +2.011847] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-f7333153fb0a
	[  +0.000028] ll header: 00000000: 02 42 f3 7a f9 00 02 42 c0 a8 43 02 08 00
	[  +4.159649] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-f7333153fb0a
	[  +0.000006] ll header: 00000000: 02 42 f3 7a f9 00 02 42 c0 a8 43 02 08 00
	[  +8.195411] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-f7333153fb0a
	[  +0.000006] ll header: 00000000: 02 42 f3 7a f9 00 02 42 c0 a8 43 02 08 00
	[Jul17 19:14] process 'docker/tmp/qemu-check188754489/check' started with executable stack
	
	* 
	* ==> etcd [b13a19d103774971cbc8e8ba48f201f85edbf7be76691036eb06342cc2c22061] <==
	* {"level":"info","ts":"2023-07-17T19:17:19.837Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-17T19:17:19.837Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-17T19:17:19.837Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2023-07-17T19:17:19.837Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2023-07-17T19:17:19.838Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T19:17:19.838Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T19:17:19.839Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-07-17T19:17:19.840Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-07-17T19:17:19.840Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-07-17T19:17:19.840Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-07-17T19:17:19.840Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-07-17T19:17:21.115Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 2"}
	{"level":"info","ts":"2023-07-17T19:17:21.115Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-07-17T19:17:21.115Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2023-07-17T19:17:21.115Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 3"}
	{"level":"info","ts":"2023-07-17T19:17:21.115Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-07-17T19:17:21.115Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 3"}
	{"level":"info","ts":"2023-07-17T19:17:21.115Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-07-17T19:17:21.230Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:pause-795576 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-17T19:17:21.230Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T19:17:21.230Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T19:17:21.231Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-17T19:17:21.231Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-17T19:17:21.232Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2023-07-17T19:17:21.232Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> etcd [f6dec920e96cf327838fec0d74079f5434ac14535a8435b632e013e75fdb381f] <==
	* {"level":"info","ts":"2023-07-17T19:17:05.282Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"948.071µs"}
	{"level":"info","ts":"2023-07-17T19:17:05.284Z","caller":"etcdserver/server.go:530","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2023-07-17T19:17:05.294Z","caller":"etcdserver/raft.go:529","msg":"restarting local member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","commit-index":452}
	{"level":"info","ts":"2023-07-17T19:17:05.294Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=()"}
	{"level":"info","ts":"2023-07-17T19:17:05.294Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became follower at term 2"}
	{"level":"info","ts":"2023-07-17T19:17:05.294Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 8688e899f7831fc7 [peers: [], term: 2, commit: 452, applied: 0, lastindex: 452, lastterm: 2]"}
	{"level":"warn","ts":"2023-07-17T19:17:05.295Z","caller":"auth/store.go:1234","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2023-07-17T19:17:05.363Z","caller":"mvcc/kvstore.go:393","msg":"kvstore restored","current-rev":433}
	{"level":"info","ts":"2023-07-17T19:17:05.365Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2023-07-17T19:17:05.367Z","caller":"etcdserver/corrupt.go:95","msg":"starting initial corruption check","local-member-id":"8688e899f7831fc7","timeout":"7s"}
	{"level":"info","ts":"2023-07-17T19:17:05.367Z","caller":"etcdserver/corrupt.go:165","msg":"initial corruption checking passed; no corruption","local-member-id":"8688e899f7831fc7"}
	{"level":"info","ts":"2023-07-17T19:17:05.367Z","caller":"etcdserver/server.go:854","msg":"starting etcd server","local-member-id":"8688e899f7831fc7","local-server-version":"3.5.7","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2023-07-17T19:17:05.367Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-17T19:17:05.367Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-07-17T19:17:05.367Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-17T19:17:05.367Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-17T19:17:05.368Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2023-07-17T19:17:05.368Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2023-07-17T19:17:05.368Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T19:17:05.368Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T19:17:05.373Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-07-17T19:17:05.374Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-07-17T19:17:05.374Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-07-17T19:17:05.374Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-07-17T19:17:05.374Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	
	* 
	* ==> kernel <==
	*  19:17:42 up  4:00,  0 users,  load average: 4.34, 3.64, 2.34
	Linux pause-795576 5.15.0-1037-gcp #45~20.04.1-Ubuntu SMP Thu Jun 22 08:31:09 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [249f2d6748858a003aca4fb5b25038d813f220e6af0a35223e06266835417555] <==
	* I0717 19:16:23.969753       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0717 19:16:23.975535       1 main.go:107] hostIP = 192.168.67.2
	podIP = 192.168.67.2
	I0717 19:16:23.980786       1 main.go:116] setting mtu 1500 for CNI 
	I0717 19:16:23.980830       1 main.go:146] kindnetd IP family: "ipv4"
	I0717 19:16:23.980848       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0717 19:16:54.308764       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0717 19:16:54.324371       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0717 19:16:54.324500       1 main.go:227] handling current node
	
	* 
	* ==> kindnet [a5b342c3188d4dada0219660ad4c433081d03f89457a114d9d6e0e04ee02126e] <==
	* I0717 19:17:23.371239       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0717 19:17:23.371335       1 main.go:107] hostIP = 192.168.67.2
	podIP = 192.168.67.2
	I0717 19:17:23.371492       1 main.go:116] setting mtu 1500 for CNI 
	I0717 19:17:23.371507       1 main.go:146] kindnetd IP family: "ipv4"
	I0717 19:17:23.371541       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0717 19:17:23.766546       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0717 19:17:23.766572       1 main.go:227] handling current node
	I0717 19:17:33.785147       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0717 19:17:33.785173       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [3c4ecc96bf0b93221b1cd41bd5f1526256df3f33b13568ef0d2794d1eb83eca3] <==
	* I0717 19:17:05.492802       1 server.go:553] external host was not specified, using 192.168.67.2
	I0717 19:17:05.495140       1 server.go:166] Version: v1.27.3
	I0717 19:17:05.495187       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	* 
	* ==> kube-apiserver [cedfc31e11d529607b3ee5ca35c5cce028b584be899fe6d4a49d88a77aad3495] <==
	* I0717 19:17:22.391297       1 aggregator.go:150] waiting for initial CRD sync...
	I0717 19:17:22.391624       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0717 19:17:22.393082       1 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller
	I0717 19:17:22.412061       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0717 19:17:22.499665       1 controller.go:155] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0717 19:17:22.566638       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0717 19:17:22.587330       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 19:17:22.591320       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0717 19:17:22.591343       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0717 19:17:22.591502       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0717 19:17:22.591524       1 aggregator.go:152] initial CRD sync complete...
	I0717 19:17:22.591531       1 autoregister_controller.go:141] Starting autoregister controller
	I0717 19:17:22.591537       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0717 19:17:22.591544       1 cache.go:39] Caches are synced for autoregister controller
	I0717 19:17:22.591699       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0717 19:17:22.593534       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0717 19:17:22.593673       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0717 19:17:22.662183       1 shared_informer.go:318] Caches are synced for configmaps
	I0717 19:17:23.140996       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0717 19:17:23.397058       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0717 19:17:24.593082       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0717 19:17:24.689958       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0717 19:17:24.698330       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0717 19:17:24.810171       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 19:17:24.819068       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-controller-manager [03e6c6fd4ceca064612bdbf851f465381b9ec0dc6e2ab6a3dca077888376c88f] <==
	* I0717 19:17:34.790868       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0717 19:17:34.790900       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0717 19:17:34.796130       1 shared_informer.go:318] Caches are synced for endpoint
	I0717 19:17:34.802002       1 shared_informer.go:318] Caches are synced for GC
	I0717 19:17:34.813319       1 shared_informer.go:318] Caches are synced for HPA
	I0717 19:17:34.817525       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0717 19:17:34.817562       1 shared_informer.go:318] Caches are synced for taint
	I0717 19:17:34.817702       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0717 19:17:34.817688       1 node_lifecycle_controller.go:1223] "Initializing eviction metric for zone" zone=""
	I0717 19:17:34.817758       1 taint_manager.go:211] "Sending events to api server"
	I0717 19:17:34.817811       1 event.go:307] "Event occurred" object="pause-795576" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-795576 event: Registered Node pause-795576 in Controller"
	I0717 19:17:34.817864       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-795576"
	I0717 19:17:34.817932       1 node_lifecycle_controller.go:1069] "Controller detected that zone is now in new state" zone="" newState=Normal
	I0717 19:17:34.820060       1 shared_informer.go:318] Caches are synced for crt configmap
	I0717 19:17:34.822381       1 shared_informer.go:318] Caches are synced for job
	I0717 19:17:34.845101       1 shared_informer.go:318] Caches are synced for daemon sets
	I0717 19:17:34.888241       1 shared_informer.go:318] Caches are synced for stateful set
	I0717 19:17:34.906579       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0717 19:17:34.924305       1 shared_informer.go:318] Caches are synced for deployment
	I0717 19:17:34.949261       1 shared_informer.go:318] Caches are synced for disruption
	I0717 19:17:34.982902       1 shared_informer.go:318] Caches are synced for resource quota
	I0717 19:17:34.997419       1 shared_informer.go:318] Caches are synced for resource quota
	I0717 19:17:35.313169       1 shared_informer.go:318] Caches are synced for garbage collector
	I0717 19:17:35.313204       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0717 19:17:35.329674       1 shared_informer.go:318] Caches are synced for garbage collector
	
	* 
	* ==> kube-controller-manager [d061dfba20f3f98ca0245fe20749ce94924338455dffdddfee06f3279e0c6ff8] <==
	* 
	* 
	* ==> kube-proxy [6c2d784dbdd1891bc8ee8488658197313688eb7036abdb4530230dcf2eccb3fa] <==
	* I0717 19:16:24.216895       1 node.go:141] Successfully retrieved node IP: 192.168.67.2
	I0717 19:16:24.217025       1 server_others.go:110] "Detected node IP" address="192.168.67.2"
	I0717 19:16:24.217063       1 server_others.go:554] "Using iptables proxy"
	I0717 19:16:24.417185       1 server_others.go:192] "Using iptables Proxier"
	I0717 19:16:24.417300       1 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0717 19:16:24.417337       1 server_others.go:200] "Creating dualStackProxier for iptables"
	I0717 19:16:24.417381       1 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0717 19:16:24.417438       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 19:16:24.418155       1 server.go:658] "Version info" version="v1.27.3"
	I0717 19:16:24.418441       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 19:16:24.419037       1 config.go:188] "Starting service config controller"
	I0717 19:16:24.419125       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0717 19:16:24.419074       1 config.go:97] "Starting endpoint slice config controller"
	I0717 19:16:24.420055       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0717 19:16:24.419460       1 config.go:315] "Starting node config controller"
	I0717 19:16:24.420155       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0717 19:16:24.519240       1 shared_informer.go:318] Caches are synced for service config
	I0717 19:16:24.522185       1 shared_informer.go:318] Caches are synced for node config
	I0717 19:16:24.522328       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [c7d9925d4d034c86f811fcaa0fc3e82d9e6c6d2aa3586c572cb69b949b380aae] <==
	* I0717 19:17:23.482651       1 node.go:141] Successfully retrieved node IP: 192.168.67.2
	I0717 19:17:23.482745       1 server_others.go:110] "Detected node IP" address="192.168.67.2"
	I0717 19:17:23.482776       1 server_others.go:554] "Using iptables proxy"
	I0717 19:17:23.503860       1 server_others.go:192] "Using iptables Proxier"
	I0717 19:17:23.503912       1 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0717 19:17:23.503927       1 server_others.go:200] "Creating dualStackProxier for iptables"
	I0717 19:17:23.503943       1 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0717 19:17:23.504056       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 19:17:23.504781       1 server.go:658] "Version info" version="v1.27.3"
	I0717 19:17:23.504801       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 19:17:23.505456       1 config.go:188] "Starting service config controller"
	I0717 19:17:23.505487       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0717 19:17:23.505537       1 config.go:315] "Starting node config controller"
	I0717 19:17:23.505555       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0717 19:17:23.505649       1 config.go:97] "Starting endpoint slice config controller"
	I0717 19:17:23.505892       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0717 19:17:23.606224       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0717 19:17:23.606325       1 shared_informer.go:318] Caches are synced for node config
	I0717 19:17:23.606333       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [ab7184693b8535872a6449bd84279882db6966e0d108be297584389fcbd446cd] <==
	* I0717 19:17:17.222310       1 serving.go:348] Generated self-signed cert in-memory
	W0717 19:17:17.444746       1 authentication.go:368] Error looking up in-cluster authentication configuration: Get "https://192.168.67.2:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.67.2:8443: connect: connection refused
	W0717 19:17:17.444795       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0717 19:17:17.444803       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0717 19:17:17.447621       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.3"
	I0717 19:17:17.447646       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 19:17:17.448801       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0717 19:17:17.448841       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0717 19:17:17.448865       1 shared_informer.go:314] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 19:17:17.448877       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0717 19:17:17.449448       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0717 19:17:17.449470       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 19:17:17.449486       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0717 19:17:17.449645       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [f2971fae983c36039ba84ce578e2ef2b500468b090b10b9309dfd36f30cb0e41] <==
	* I0717 19:17:20.581449       1 serving.go:348] Generated self-signed cert in-memory
	W0717 19:17:22.470452       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0717 19:17:22.470489       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 19:17:22.470504       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0717 19:17:22.470517       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0717 19:17:22.567509       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.3"
	I0717 19:17:22.567620       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 19:17:22.570174       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0717 19:17:22.570281       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 19:17:22.570809       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0717 19:17:22.571433       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 19:17:22.670820       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Jul 17 19:17:20 pause-795576 kubelet[3413]: I0717 19:17:20.387492    3413 kubelet_node_status.go:70] "Attempting to register node" node="pause-795576"
	Jul 17 19:17:22 pause-795576 kubelet[3413]: I0717 19:17:22.588489    3413 kubelet_node_status.go:108] "Node was previously registered" node="pause-795576"
	Jul 17 19:17:22 pause-795576 kubelet[3413]: I0717 19:17:22.588599    3413 kubelet_node_status.go:73] "Successfully registered node" node="pause-795576"
	Jul 17 19:17:22 pause-795576 kubelet[3413]: I0717 19:17:22.590307    3413 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 17 19:17:22 pause-795576 kubelet[3413]: I0717 19:17:22.591362    3413 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 17 19:17:22 pause-795576 kubelet[3413]: I0717 19:17:22.876082    3413 apiserver.go:52] "Watching apiserver"
	Jul 17 19:17:22 pause-795576 kubelet[3413]: I0717 19:17:22.879570    3413 topology_manager.go:212] "Topology Admit Handler"
	Jul 17 19:17:22 pause-795576 kubelet[3413]: I0717 19:17:22.880108    3413 topology_manager.go:212] "Topology Admit Handler"
	Jul 17 19:17:22 pause-795576 kubelet[3413]: I0717 19:17:22.880226    3413 topology_manager.go:212] "Topology Admit Handler"
	Jul 17 19:17:22 pause-795576 kubelet[3413]: I0717 19:17:22.962231    3413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/543aec10-6af6-4088-941a-d684da877b3f-kube-proxy\") pod \"kube-proxy-vcv28\" (UID: \"543aec10-6af6-4088-941a-d684da877b3f\") " pod="kube-system/kube-proxy-vcv28"
	Jul 17 19:17:22 pause-795576 kubelet[3413]: I0717 19:17:22.962302    3413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/113dbc11-1279-4188-b57f-ef1a7476354e-config-volume\") pod \"coredns-5d78c9869d-7bhk2\" (UID: \"113dbc11-1279-4188-b57f-ef1a7476354e\") " pod="kube-system/coredns-5d78c9869d-7bhk2"
	Jul 17 19:17:22 pause-795576 kubelet[3413]: I0717 19:17:22.962334    3413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/543aec10-6af6-4088-941a-d684da877b3f-xtables-lock\") pod \"kube-proxy-vcv28\" (UID: \"543aec10-6af6-4088-941a-d684da877b3f\") " pod="kube-system/kube-proxy-vcv28"
	Jul 17 19:17:22 pause-795576 kubelet[3413]: I0717 19:17:22.962361    3413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/543aec10-6af6-4088-941a-d684da877b3f-lib-modules\") pod \"kube-proxy-vcv28\" (UID: \"543aec10-6af6-4088-941a-d684da877b3f\") " pod="kube-system/kube-proxy-vcv28"
	Jul 17 19:17:22 pause-795576 kubelet[3413]: I0717 19:17:22.962388    3413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh7kg\" (UniqueName: \"kubernetes.io/projected/543aec10-6af6-4088-941a-d684da877b3f-kube-api-access-hh7kg\") pod \"kube-proxy-vcv28\" (UID: \"543aec10-6af6-4088-941a-d684da877b3f\") " pod="kube-system/kube-proxy-vcv28"
	Jul 17 19:17:22 pause-795576 kubelet[3413]: I0717 19:17:22.962416    3413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8bj6\" (UniqueName: \"kubernetes.io/projected/113dbc11-1279-4188-b57f-ef1a7476354e-kube-api-access-k8bj6\") pod \"coredns-5d78c9869d-7bhk2\" (UID: \"113dbc11-1279-4188-b57f-ef1a7476354e\") " pod="kube-system/coredns-5d78c9869d-7bhk2"
	Jul 17 19:17:22 pause-795576 kubelet[3413]: I0717 19:17:22.980306    3413 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world"
	Jul 17 19:17:23 pause-795576 kubelet[3413]: E0717 19:17:23.000618    3413 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-apiserver-pause-795576\" already exists" pod="kube-system/kube-apiserver-pause-795576"
	Jul 17 19:17:23 pause-795576 kubelet[3413]: I0717 19:17:23.063170    3413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7367b120-9ad2-48ef-a098-f9427cd70ce7-xtables-lock\") pod \"kindnet-blwth\" (UID: \"7367b120-9ad2-48ef-a098-f9427cd70ce7\") " pod="kube-system/kindnet-blwth"
	Jul 17 19:17:23 pause-795576 kubelet[3413]: I0717 19:17:23.063238    3413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7367b120-9ad2-48ef-a098-f9427cd70ce7-lib-modules\") pod \"kindnet-blwth\" (UID: \"7367b120-9ad2-48ef-a098-f9427cd70ce7\") " pod="kube-system/kindnet-blwth"
	Jul 17 19:17:23 pause-795576 kubelet[3413]: I0717 19:17:23.063477    3413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/7367b120-9ad2-48ef-a098-f9427cd70ce7-cni-cfg\") pod \"kindnet-blwth\" (UID: \"7367b120-9ad2-48ef-a098-f9427cd70ce7\") " pod="kube-system/kindnet-blwth"
	Jul 17 19:17:23 pause-795576 kubelet[3413]: I0717 19:17:23.063542    3413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl564\" (UniqueName: \"kubernetes.io/projected/7367b120-9ad2-48ef-a098-f9427cd70ce7-kube-api-access-cl564\") pod \"kindnet-blwth\" (UID: \"7367b120-9ad2-48ef-a098-f9427cd70ce7\") " pod="kube-system/kindnet-blwth"
	Jul 17 19:17:23 pause-795576 kubelet[3413]: I0717 19:17:23.063611    3413 reconciler.go:41] "Reconciler: start to sync state"
	Jul 17 19:17:23 pause-795576 kubelet[3413]: I0717 19:17:23.180857    3413 scope.go:115] "RemoveContainer" containerID="249f2d6748858a003aca4fb5b25038d813f220e6af0a35223e06266835417555"
	Jul 17 19:17:23 pause-795576 kubelet[3413]: I0717 19:17:23.184076    3413 scope.go:115] "RemoveContainer" containerID="e10be0e9af17f8987e24e478a481f2c0799874fb1457f4c96c792fa327c1e1fe"
	Jul 17 19:17:23 pause-795576 kubelet[3413]: I0717 19:17:23.184711    3413 scope.go:115] "RemoveContainer" containerID="6c2d784dbdd1891bc8ee8488658197313688eb7036abdb4530230dcf2eccb3fa"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-795576 -n pause-795576
helpers_test.go:261: (dbg) Run:  kubectl --context pause-795576 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (45.59s)

                                                
                                    

Test pass (266/298)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 4.98
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.06
10 TestDownloadOnly/v1.27.3/json-events 4.84
11 TestDownloadOnly/v1.27.3/preload-exists 0
15 TestDownloadOnly/v1.27.3/LogsDuration 0.06
16 TestDownloadOnly/DeleteAll 0.2
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.12
18 TestDownloadOnlyKic 1.19
19 TestBinaryMirror 0.7
20 TestOffline 84.97
22 TestAddons/Setup 123.48
24 TestAddons/parallel/Registry 13.81
26 TestAddons/parallel/InspektorGadget 10.74
27 TestAddons/parallel/MetricsServer 5.95
28 TestAddons/parallel/HelmTiller 9.65
30 TestAddons/parallel/CSI 50.97
31 TestAddons/parallel/Headlamp 12.3
32 TestAddons/parallel/CloudSpanner 5.71
35 TestAddons/serial/GCPAuth/Namespaces 0.12
36 TestAddons/StoppedEnableDisable 12.14
37 TestCertOptions 28.45
38 TestCertExpiration 232.09
40 TestForceSystemdFlag 33.1
41 TestForceSystemdEnv 34.05
43 TestKVMDriverInstallOrUpdate 1.61
48 TestErrorSpam/start 0.58
49 TestErrorSpam/status 0.85
50 TestErrorSpam/pause 1.49
51 TestErrorSpam/unpause 1.48
52 TestErrorSpam/stop 1.36
55 TestFunctional/serial/CopySyncFile 0
56 TestFunctional/serial/StartWithProxy 40.25
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 42.73
59 TestFunctional/serial/KubeContext 0.05
60 TestFunctional/serial/KubectlGetPods 0.07
63 TestFunctional/serial/CacheCmd/cache/add_remote 2.73
64 TestFunctional/serial/CacheCmd/cache/add_local 0.74
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
66 TestFunctional/serial/CacheCmd/cache/list 0.04
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.26
68 TestFunctional/serial/CacheCmd/cache/cache_reload 1.6
69 TestFunctional/serial/CacheCmd/cache/delete 0.09
70 TestFunctional/serial/MinikubeKubectlCmd 0.1
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
72 TestFunctional/serial/ExtraConfig 32.89
73 TestFunctional/serial/ComponentHealth 0.07
74 TestFunctional/serial/LogsCmd 1.36
75 TestFunctional/serial/LogsFileCmd 1.36
76 TestFunctional/serial/InvalidService 4.37
78 TestFunctional/parallel/ConfigCmd 0.35
79 TestFunctional/parallel/DashboardCmd 9.06
80 TestFunctional/parallel/DryRun 0.48
81 TestFunctional/parallel/InternationalLanguage 0.16
82 TestFunctional/parallel/StatusCmd 0.91
86 TestFunctional/parallel/ServiceCmdConnect 9.56
87 TestFunctional/parallel/AddonsCmd 0.13
88 TestFunctional/parallel/PersistentVolumeClaim 28.97
90 TestFunctional/parallel/SSHCmd 0.62
91 TestFunctional/parallel/CpCmd 1.37
92 TestFunctional/parallel/MySQL 20.43
93 TestFunctional/parallel/FileSync 0.33
94 TestFunctional/parallel/CertSync 1.81
98 TestFunctional/parallel/NodeLabels 0.09
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.55
102 TestFunctional/parallel/License 0.13
103 TestFunctional/parallel/ServiceCmd/DeployApp 10.25
105 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.43
106 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
108 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.4
109 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
110 TestFunctional/parallel/ServiceCmd/List 0.47
111 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
115 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
116 TestFunctional/parallel/ServiceCmd/JSONOutput 0.51
117 TestFunctional/parallel/ProfileCmd/profile_not_create 0.36
118 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
119 TestFunctional/parallel/MountCmd/any-port 7.21
120 TestFunctional/parallel/ProfileCmd/profile_list 0.35
121 TestFunctional/parallel/ServiceCmd/Format 0.34
122 TestFunctional/parallel/ProfileCmd/profile_json_output 0.34
123 TestFunctional/parallel/ServiceCmd/URL 0.37
124 TestFunctional/parallel/Version/short 0.05
125 TestFunctional/parallel/Version/components 1.54
126 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
127 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
128 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
129 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
130 TestFunctional/parallel/ImageCommands/ImageBuild 2.01
131 TestFunctional/parallel/ImageCommands/Setup 1
132 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.21
133 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
134 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.17
135 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
136 TestFunctional/parallel/MountCmd/specific-port 2.11
137 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.39
138 TestFunctional/parallel/MountCmd/VerifyCleanup 2.12
140 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.82
141 TestFunctional/parallel/ImageCommands/ImageRemove 0.46
142 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.17
143 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.19
144 TestFunctional/delete_addon-resizer_images 0.07
145 TestFunctional/delete_my-image_image 0.01
146 TestFunctional/delete_minikube_cached_images 0.02
150 TestIngressAddonLegacy/StartLegacyK8sCluster 67.39
152 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 9.29
153 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.56
157 TestJSONOutput/start/Command 69.35
158 TestJSONOutput/start/Audit 0
160 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
161 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
163 TestJSONOutput/pause/Command 0.64
164 TestJSONOutput/pause/Audit 0
166 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
169 TestJSONOutput/unpause/Command 0.6
170 TestJSONOutput/unpause/Audit 0
172 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
175 TestJSONOutput/stop/Command 5.69
176 TestJSONOutput/stop/Audit 0
178 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
180 TestErrorJSONOutput 0.19
182 TestKicCustomNetwork/create_custom_network 32.89
183 TestKicCustomNetwork/use_default_bridge_network 24.74
184 TestKicExistingNetwork 23.99
185 TestKicCustomSubnet 27.64
186 TestKicStaticIP 25.84
187 TestMainNoArgs 0.05
188 TestMinikubeProfile 50.59
191 TestMountStart/serial/StartWithMountFirst 5.02
192 TestMountStart/serial/VerifyMountFirst 0.23
193 TestMountStart/serial/StartWithMountSecond 5.46
194 TestMountStart/serial/VerifyMountSecond 0.24
195 TestMountStart/serial/DeleteFirst 1.6
196 TestMountStart/serial/VerifyMountPostDelete 0.24
197 TestMountStart/serial/Stop 1.19
198 TestMountStart/serial/RestartStopped 6.77
199 TestMountStart/serial/VerifyMountPostStop 0.24
202 TestMultiNode/serial/FreshStart2Nodes 87.93
203 TestMultiNode/serial/DeployApp2Nodes 2.95
205 TestMultiNode/serial/AddNode 21.01
206 TestMultiNode/serial/ProfileList 0.26
207 TestMultiNode/serial/CopyFile 8.81
208 TestMultiNode/serial/StopNode 2.09
209 TestMultiNode/serial/StartAfterStop 10.58
210 TestMultiNode/serial/RestartKeepsNodes 115.18
211 TestMultiNode/serial/DeleteNode 4.63
212 TestMultiNode/serial/StopMultiNode 23.82
213 TestMultiNode/serial/RestartMultiNode 77.73
214 TestMultiNode/serial/ValidateNameConflict 22.86
219 TestPreload 143.72
221 TestScheduledStopUnix 97.47
224 TestInsufficientStorage 12.78
227 TestKubernetesUpgrade 349.91
228 TestMissingContainerUpgrade 124.4
230 TestStoppedBinaryUpgrade/Setup 0.45
231 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
242 TestNoKubernetes/serial/StartWithK8s 36.47
248 TestNetworkPlugins/group/false 8.37
252 TestNoKubernetes/serial/StartWithStopK8s 8.34
253 TestNoKubernetes/serial/Start 6.97
254 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
255 TestNoKubernetes/serial/ProfileList 1.54
256 TestNoKubernetes/serial/Stop 1.32
257 TestNoKubernetes/serial/StartNoArgs 8.41
258 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
260 TestPause/serial/Start 74.44
261 TestStoppedBinaryUpgrade/MinikubeLogs 0.8
264 TestStartStop/group/old-k8s-version/serial/FirstStart 131.62
266 TestStartStop/group/no-preload/serial/FirstStart 55.86
267 TestStartStop/group/no-preload/serial/DeployApp 8.39
268 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.94
269 TestStartStop/group/no-preload/serial/Stop 11.93
270 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.17
271 TestStartStop/group/no-preload/serial/SecondStart 339.69
272 TestStartStop/group/old-k8s-version/serial/DeployApp 8.46
273 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.83
274 TestStartStop/group/old-k8s-version/serial/Stop 12.07
276 TestStartStop/group/embed-certs/serial/FirstStart 70.54
277 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
278 TestStartStop/group/old-k8s-version/serial/SecondStart 440.14
279 TestStartStop/group/embed-certs/serial/DeployApp 9.43
280 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.98
281 TestStartStop/group/embed-certs/serial/Stop 11.87
282 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
283 TestStartStop/group/embed-certs/serial/SecondStart 336.66
285 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 69.61
286 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.43
287 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.9
288 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.92
289 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
290 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 346.63
291 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 9.02
292 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
293 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.28
294 TestStartStop/group/no-preload/serial/Pause 2.56
296 TestStartStop/group/newest-cni/serial/FirstStart 35.21
297 TestStartStop/group/newest-cni/serial/DeployApp 0
298 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.84
299 TestStartStop/group/newest-cni/serial/Stop 1.19
300 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
301 TestStartStop/group/newest-cni/serial/SecondStart 26.34
302 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
303 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
304 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.34
305 TestStartStop/group/newest-cni/serial/Pause 2.89
306 TestNetworkPlugins/group/auto/Start 67.92
307 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 12.01
308 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
309 TestNetworkPlugins/group/auto/KubeletFlags 0.26
310 TestNetworkPlugins/group/auto/NetCatPod 10.29
311 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.31
312 TestStartStop/group/embed-certs/serial/Pause 2.74
313 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.09
314 TestNetworkPlugins/group/kindnet/Start 71.02
315 TestNetworkPlugins/group/auto/DNS 0.17
316 TestNetworkPlugins/group/auto/Localhost 0.16
317 TestNetworkPlugins/group/auto/HairPin 0.14
318 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
319 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.3
320 TestStartStop/group/old-k8s-version/serial/Pause 2.9
321 TestNetworkPlugins/group/calico/Start 68.46
322 TestNetworkPlugins/group/custom-flannel/Start 60.2
323 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
324 TestNetworkPlugins/group/kindnet/KubeletFlags 0.27
325 TestNetworkPlugins/group/kindnet/NetCatPod 10.3
326 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 8.02
327 TestNetworkPlugins/group/calico/ControllerPod 5.02
328 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.26
329 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.3
330 TestNetworkPlugins/group/kindnet/DNS 0.16
331 TestNetworkPlugins/group/kindnet/Localhost 0.13
332 TestNetworkPlugins/group/kindnet/HairPin 0.14
333 TestNetworkPlugins/group/calico/KubeletFlags 0.25
334 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
335 TestNetworkPlugins/group/calico/NetCatPod 10.34
336 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.32
337 TestNetworkPlugins/group/custom-flannel/DNS 0.18
338 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.97
339 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
340 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
341 TestNetworkPlugins/group/calico/DNS 0.16
342 TestNetworkPlugins/group/calico/Localhost 0.17
343 TestNetworkPlugins/group/calico/HairPin 0.18
344 TestNetworkPlugins/group/enable-default-cni/Start 43.66
345 TestNetworkPlugins/group/flannel/Start 58.78
346 TestNetworkPlugins/group/bridge/Start 67.36
347 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.27
348 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.39
349 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
350 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
351 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
352 TestNetworkPlugins/group/flannel/ControllerPod 5.02
353 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
354 TestNetworkPlugins/group/flannel/NetCatPod 9.27
355 TestNetworkPlugins/group/flannel/DNS 0.16
356 TestNetworkPlugins/group/flannel/Localhost 0.15
357 TestNetworkPlugins/group/flannel/HairPin 0.14
358 TestNetworkPlugins/group/bridge/KubeletFlags 0.26
359 TestNetworkPlugins/group/bridge/NetCatPod 9.28
360 TestNetworkPlugins/group/bridge/DNS 0.17
361 TestNetworkPlugins/group/bridge/Localhost 0.14
362 TestNetworkPlugins/group/bridge/HairPin 0.17
TestDownloadOnly/v1.16.0/json-events (4.98s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-884134 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-884134 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.97756408s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (4.98s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-884134
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-884134: exit status 85 (56.995938ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-884134 | jenkins | v1.30.1 | 17 Jul 23 18:45 UTC |          |
	|         | -p download-only-884134        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 18:45:21
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 18:45:21.455931  144834 out.go:296] Setting OutFile to fd 1 ...
	I0717 18:45:21.456100  144834 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 18:45:21.456113  144834 out.go:309] Setting ErrFile to fd 2...
	I0717 18:45:21.456119  144834 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 18:45:21.456319  144834 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-138069/.minikube/bin
	W0717 18:45:21.456439  144834 root.go:314] Error reading config file at /home/jenkins/minikube-integration/16890-138069/.minikube/config/config.json: open /home/jenkins/minikube-integration/16890-138069/.minikube/config/config.json: no such file or directory
	I0717 18:45:21.457022  144834 out.go:303] Setting JSON to true
	I0717 18:45:21.458588  144834 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":12472,"bootTime":1689607049,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 18:45:21.458655  144834 start.go:138] virtualization: kvm guest
	I0717 18:45:21.461379  144834 out.go:97] [download-only-884134] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 18:45:21.462982  144834 out.go:169] MINIKUBE_LOCATION=16890
	W0717 18:45:21.461485  144834 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/16890-138069/.minikube/cache/preloaded-tarball: no such file or directory
	I0717 18:45:21.461526  144834 notify.go:220] Checking for updates...
	I0717 18:45:21.466175  144834 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 18:45:21.467849  144834 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16890-138069/kubeconfig
	I0717 18:45:21.469258  144834 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-138069/.minikube
	I0717 18:45:21.470791  144834 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0717 18:45:21.473428  144834 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 18:45:21.473685  144834 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 18:45:21.496372  144834 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 18:45:21.496491  144834 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 18:45:21.863156  144834 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2023-07-17 18:45:21.854301281 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 18:45:21.863274  144834 docker.go:294] overlay module found
	I0717 18:45:21.865303  144834 out.go:97] Using the docker driver based on user configuration
	I0717 18:45:21.865360  144834 start.go:298] selected driver: docker
	I0717 18:45:21.865370  144834 start.go:880] validating driver "docker" against <nil>
	I0717 18:45:21.865466  144834 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 18:45:21.923196  144834 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2023-07-17 18:45:21.914901353 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 18:45:21.923368  144834 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0717 18:45:21.923874  144834 start_flags.go:382] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0717 18:45:21.924041  144834 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 18:45:21.926155  144834 out.go:169] Using Docker driver with root privileges
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-884134"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.27.3/json-events (4.84s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-884134 --force --alsologtostderr --kubernetes-version=v1.27.3 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-884134 --force --alsologtostderr --kubernetes-version=v1.27.3 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.840768993s)
--- PASS: TestDownloadOnly/v1.27.3/json-events (4.84s)
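
For reference, a minimal Go sketch of how this download-only invocation could be driven outside the suite with os/exec; the binary path and flags are copied from the command logged above, and runDownloadOnly is a hypothetical helper name, not part of the test code.

	// Hedged sketch: shells out to the minikube binary with the same
	// download-only flags shown in the log above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func runDownloadOnly(profile, k8sVersion string) error {
		cmd := exec.Command("out/minikube-linux-amd64", "start",
			"-o=json", "--download-only",
			"-p", profile,
			"--force", "--alsologtostderr",
			"--kubernetes-version="+k8sVersion,
			"--container-runtime=crio",
			"--driver=docker")
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("download-only start failed: %v\n%s", err, out)
		}
		return nil
	}

	func main() {
		if err := runDownloadOnly("download-only-884134", "v1.27.3"); err != nil {
			fmt.Println(err)
		}
	}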

                                                
                                    
x
+
TestDownloadOnly/v1.27.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.3/preload-exists
--- PASS: TestDownloadOnly/v1.27.3/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.27.3/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.3/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-884134
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-884134: exit status 85 (60.638842ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-884134 | jenkins | v1.30.1 | 17 Jul 23 18:45 UTC |          |
	|         | -p download-only-884134        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-884134 | jenkins | v1.30.1 | 17 Jul 23 18:45 UTC |          |
	|         | -p download-only-884134        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.3   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 18:45:26
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.20.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 18:45:26.492549  144980 out.go:296] Setting OutFile to fd 1 ...
	I0717 18:45:26.492674  144980 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 18:45:26.492682  144980 out.go:309] Setting ErrFile to fd 2...
	I0717 18:45:26.492687  144980 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 18:45:26.493198  144980 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-138069/.minikube/bin
	W0717 18:45:26.493414  144980 root.go:314] Error reading config file at /home/jenkins/minikube-integration/16890-138069/.minikube/config/config.json: open /home/jenkins/minikube-integration/16890-138069/.minikube/config/config.json: no such file or directory
	I0717 18:45:26.494252  144980 out.go:303] Setting JSON to true
	I0717 18:45:26.495150  144980 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":12478,"bootTime":1689607049,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 18:45:26.495216  144980 start.go:138] virtualization: kvm guest
	I0717 18:45:26.496990  144980 out.go:97] [download-only-884134] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 18:45:26.498535  144980 out.go:169] MINIKUBE_LOCATION=16890
	I0717 18:45:26.497153  144980 notify.go:220] Checking for updates...
	I0717 18:45:26.501526  144980 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 18:45:26.503205  144980 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16890-138069/kubeconfig
	I0717 18:45:26.504593  144980 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-138069/.minikube
	I0717 18:45:26.505962  144980 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-884134"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.3/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.20s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-884134
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnlyKic (1.19s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-134543 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-134543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-134543
--- PASS: TestDownloadOnlyKic (1.19s)

                                                
                                    
x
+
TestBinaryMirror (0.7s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-288495 --alsologtostderr --binary-mirror http://127.0.0.1:45155 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-288495" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-288495
--- PASS: TestBinaryMirror (0.70s)

                                                
                                    
x
+
TestOffline (84.97s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-369384 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-369384 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (1m22.615631178s)
helpers_test.go:175: Cleaning up "offline-crio-369384" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-369384
E0717 19:15:42.943719  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-369384: (2.355411598s)
--- PASS: TestOffline (84.97s)

                                                
                                    
x
+
TestAddons/Setup (123.48s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p addons-646610 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p addons-646610 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m3.481467801s)
--- PASS: TestAddons/Setup (123.48s)

                                                
                                    
x
+
TestAddons/parallel/Registry (13.81s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 15.834427ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-pjmqt" [9129065f-dc8e-4ce0-812c-402a1e000938] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.011677449s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-rxnkn" [f2533dac-e61a-494a-8898-7192c27a8c23] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.010710406s
addons_test.go:316: (dbg) Run:  kubectl --context addons-646610 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-646610 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-646610 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.609201027s)
addons_test.go:335: (dbg) Run:  out/minikube-linux-amd64 -p addons-646610 ip
2023/07/17 18:47:50 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p addons-646610 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (13.81s)
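
The registry check above boils down to launching a one-shot busybox pod that wget-probes the registry Service DNS name. A minimal Go sketch of that probe, mirroring the kubectl command logged above; checkRegistry and the hard-coded context name are illustrative assumptions.

	// Hedged sketch: reproduce the in-cluster registry probe from the log above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func checkRegistry(kubeContext string) error {
		// Throwaway busybox pod that wget-probes the registry Service.
		cmd := exec.Command("kubectl", "--context", kubeContext,
			"run", "--rm", "registry-test", "--restart=Never",
			"--image=gcr.io/k8s-minikube/busybox", "-it", "--",
			"sh", "-c", "wget --spider -S http://registry.kube-system.svc.cluster.local")
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("registry probe failed: %v\n%s", err, out)
		}
		return nil
	}

	func main() {
		if err := checkRegistry("addons-646610"); err != nil {
			fmt.Println(err)
		}
	}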

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (10.74s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-hvj5b" [805be6e6-f650-4b4a-9343-1531ed1e3375] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.008098503s
addons_test.go:817: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-646610
addons_test.go:817: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-646610: (5.727457435s)
--- PASS: TestAddons/parallel/InspektorGadget (10.74s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.95s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 14.028432ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-844d8db974-k86gh" [c2507465-6216-4f38-b0e4-4479e08476b1] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.010476433s
addons_test.go:391: (dbg) Run:  kubectl --context addons-646610 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p addons-646610 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.95s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (9.65s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:432: tiller-deploy stabilized in 5.089704ms
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6847666dc-rfbl2" [ef135e71-be72-4bee-9e5a-486848ace98b] Running
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.009200239s
addons_test.go:449: (dbg) Run:  kubectl --context addons-646610 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:449: (dbg) Done: kubectl --context addons-646610 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (3.681369622s)
addons_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p addons-646610 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (9.65s)

                                                
                                    
x
+
TestAddons/parallel/CSI (50.97s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 7.404954ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-646610 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-646610 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-646610 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-646610 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-646610 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-646610 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-646610 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-646610 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-646610 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-646610 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-646610 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [71887e92-a9bd-4da2-9293-81e17a2fcb0d] Pending
helpers_test.go:344: "task-pv-pod" [71887e92-a9bd-4da2-9293-81e17a2fcb0d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [71887e92-a9bd-4da2-9293-81e17a2fcb0d] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.006931272s
addons_test.go:560: (dbg) Run:  kubectl --context addons-646610 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-646610 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-646610 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-646610 delete pod task-pv-pod
addons_test.go:576: (dbg) Run:  kubectl --context addons-646610 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-646610 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-646610 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-646610 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-646610 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-646610 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-646610 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-646610 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-646610 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-646610 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-646610 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-646610 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-646610 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-646610 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-646610 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-646610 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-646610 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [fd40701c-55c0-4e47-b22b-46da028c1f4e] Pending
helpers_test.go:344: "task-pv-pod-restore" [fd40701c-55c0-4e47-b22b-46da028c1f4e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [fd40701c-55c0-4e47-b22b-46da028c1f4e] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.007343776s
addons_test.go:602: (dbg) Run:  kubectl --context addons-646610 delete pod task-pv-pod-restore
addons_test.go:606: (dbg) Run:  kubectl --context addons-646610 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-646610 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-amd64 -p addons-646610 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-linux-amd64 -p addons-646610 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.580634435s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-amd64 -p addons-646610 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (50.97s)
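
The repeated helpers_test.go:394 invocations above are a polling loop on the claim's .status.phase. A minimal Go sketch of such a loop, assuming a Bound phase is the success condition; waitForPVCBound is a hypothetical helper name.

	// Hedged sketch: poll a PVC's phase via kubectl jsonpath until Bound or timeout.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func waitForPVCBound(kubeContext, name, namespace string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", kubeContext,
				"get", "pvc", name, "-n", namespace,
				"-o", "jsonpath={.status.phase}").Output()
			if err == nil && string(out) == "Bound" {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pvc %s/%s not Bound within %s", namespace, name, timeout)
	}

	func main() {
		if err := waitForPVCBound("addons-646610", "hpvc", "default", 6*time.Minute); err != nil {
			fmt.Println(err)
		}
	}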

                                                
                                    
x
+
TestAddons/parallel/Headlamp (12.3s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-646610 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-646610 --alsologtostderr -v=1: (1.253141906s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-66f6498c69-s9c9w" [6381d089-97ba-4647-b670-ee1f9e52a1c9] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-66f6498c69-s9c9w" [6381d089-97ba-4647-b670-ee1f9e52a1c9] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.046921527s
--- PASS: TestAddons/parallel/Headlamp (12.30s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.71s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-88647b4cb-kz8wd" [1cbcd816-b7cf-4ad7-8780-818f7637071d] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.009343105s
addons_test.go:836: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-646610
--- PASS: TestAddons/parallel/CloudSpanner (5.71s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-646610 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-646610 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.14s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-646610
addons_test.go:148: (dbg) Done: out/minikube-linux-amd64 stop -p addons-646610: (11.910114061s)
addons_test.go:152: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-646610
addons_test.go:156: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-646610
addons_test.go:161: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-646610
--- PASS: TestAddons/StoppedEnableDisable (12.14s)

                                                
                                    
x
+
TestCertOptions (28.45s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-633397 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-633397 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (25.90410872s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-633397 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-633397 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-633397 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-633397" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-633397
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-633397: (1.98539112s)
--- PASS: TestCertOptions (28.45s)
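
TestCertOptions verifies that the extra --apiserver-ips and --apiserver-names end up in the apiserver certificate. A minimal Go sketch of that check, reusing the ssh/openssl command logged above; verifyAPIServerCert is a hypothetical helper and the SAN list is taken from the flags in the log.

	// Hedged sketch: dump the apiserver cert over "minikube ssh" and check
	// that the requested extra IPs and names appear in it.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func verifyAPIServerCert(profile string, wantSANs []string) error {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh",
			"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt").CombinedOutput()
		if err != nil {
			return fmt.Errorf("ssh/openssl failed: %v\n%s", err, out)
		}
		for _, san := range wantSANs {
			if !strings.Contains(string(out), san) {
				return fmt.Errorf("certificate is missing expected SAN %q", san)
			}
		}
		return nil
	}

	func main() {
		err := verifyAPIServerCert("cert-options-633397",
			[]string{"127.0.0.1", "192.168.15.15", "localhost", "www.google.com"})
		if err != nil {
			fmt.Println(err)
		}
	}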

                                                
                                    
x
+
TestCertExpiration (232.09s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-383715 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-383715 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (24.721017623s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-383715 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-383715 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (25.412384455s)
helpers_test.go:175: Cleaning up "cert-expiration-383715" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-383715
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-383715: (1.953865256s)
--- PASS: TestCertExpiration (232.09s)

                                                
                                    
x
+
TestForceSystemdFlag (33.1s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-811473 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-811473 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (26.71217968s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-811473 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-811473" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-811473
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-811473: (6.112383611s)
--- PASS: TestForceSystemdFlag (33.10s)

                                                
                                    
x
+
TestForceSystemdEnv (34.05s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-020920 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-020920 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (31.485853544s)
helpers_test.go:175: Cleaning up "force-systemd-env-020920" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-020920
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-020920: (2.56444767s)
--- PASS: TestForceSystemdEnv (34.05s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (1.61s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.61s)

                                                
                                    
x
+
TestErrorSpam/start (0.58s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-113120 --log_dir /tmp/nospam-113120 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-113120 --log_dir /tmp/nospam-113120 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-113120 --log_dir /tmp/nospam-113120 start --dry-run
--- PASS: TestErrorSpam/start (0.58s)

                                                
                                    
x
+
TestErrorSpam/status (0.85s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-113120 --log_dir /tmp/nospam-113120 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-113120 --log_dir /tmp/nospam-113120 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-113120 --log_dir /tmp/nospam-113120 status
--- PASS: TestErrorSpam/status (0.85s)

                                                
                                    
x
+
TestErrorSpam/pause (1.49s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-113120 --log_dir /tmp/nospam-113120 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-113120 --log_dir /tmp/nospam-113120 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-113120 --log_dir /tmp/nospam-113120 pause
--- PASS: TestErrorSpam/pause (1.49s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.48s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-113120 --log_dir /tmp/nospam-113120 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-113120 --log_dir /tmp/nospam-113120 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-113120 --log_dir /tmp/nospam-113120 unpause
--- PASS: TestErrorSpam/unpause (1.48s)

                                                
                                    
x
+
TestErrorSpam/stop (1.36s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-113120 --log_dir /tmp/nospam-113120 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-113120 --log_dir /tmp/nospam-113120 stop: (1.194019876s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-113120 --log_dir /tmp/nospam-113120 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-113120 --log_dir /tmp/nospam-113120 stop
--- PASS: TestErrorSpam/stop (1.36s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/16890-138069/.minikube/files/etc/test/nested/copy/144822/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (40.25s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-387153 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-387153 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (40.249799844s)
--- PASS: TestFunctional/serial/StartWithProxy (40.25s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (42.73s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-387153 --alsologtostderr -v=8
E0717 18:52:37.171011  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/client.crt: no such file or directory
E0717 18:52:37.176633  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/client.crt: no such file or directory
E0717 18:52:37.186892  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/client.crt: no such file or directory
E0717 18:52:37.207174  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/client.crt: no such file or directory
E0717 18:52:37.247488  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/client.crt: no such file or directory
E0717 18:52:37.328307  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/client.crt: no such file or directory
E0717 18:52:37.488704  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/client.crt: no such file or directory
E0717 18:52:37.809821  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/client.crt: no such file or directory
E0717 18:52:38.450832  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/client.crt: no such file or directory
E0717 18:52:39.732058  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/client.crt: no such file or directory
E0717 18:52:42.292346  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/client.crt: no such file or directory
E0717 18:52:47.413523  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-387153 --alsologtostderr -v=8: (42.72410838s)
functional_test.go:659: soft start took 42.724822524s for "functional-387153" cluster.
--- PASS: TestFunctional/serial/SoftStart (42.73s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-387153 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (2.73s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.73s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (0.74s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-387153 /tmp/TestFunctionalserialCacheCmdcacheadd_local703484145/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 cache add minikube-local-cache-test:functional-387153
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 cache delete minikube-local-cache-test:functional-387153
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-387153
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.74s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.6s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-387153 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (262.899481ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.60s)
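
The cache_reload steps above remove the cached pause image from the node, confirm crictl can no longer inspect it, reload the cache, and confirm the image is back. A minimal Go sketch of that sequence using the same commands; the run helper is an illustrative assumption.

	// Hedged sketch: reproduce the rmi / inspecti / cache reload / inspecti flow.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// run shells out to the minikube binary used throughout this report.
	func run(args ...string) ([]byte, error) {
		return exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	}

	func main() {
		profile := "functional-387153"
		image := "registry.k8s.io/pause:latest"

		// Remove the image from the node's container runtime.
		run("-p", profile, "ssh", "sudo crictl rmi "+image)

		// inspecti is expected to fail while the image is gone.
		if _, err := run("-p", profile, "ssh", "sudo crictl inspecti "+image); err == nil {
			fmt.Println("expected inspecti to fail after rmi")
		}

		// Reload the cache, after which the image should be inspectable again.
		if out, err := run("-p", profile, "cache", "reload"); err != nil {
			fmt.Printf("cache reload failed: %v\n%s\n", err, out)
			return
		}
		if out, err := run("-p", profile, "ssh", "sudo crictl inspecti "+image); err != nil {
			fmt.Printf("image still missing after reload: %v\n%s\n", err, out)
		}
	}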

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 kubectl -- --context functional-387153 get pods
E0717 18:52:57.654248  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/client.crt: no such file or directory
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-387153 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (32.89s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-387153 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0717 18:53:18.135044  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-387153 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.894186562s)
functional_test.go:757: restart took 32.894318877s for "functional-387153" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (32.89s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-387153 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.36s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-387153 logs: (1.360134293s)
--- PASS: TestFunctional/serial/LogsCmd (1.36s)

TestFunctional/serial/LogsFileCmd (1.36s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 logs --file /tmp/TestFunctionalserialLogsFileCmd699927526/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-387153 logs --file /tmp/TestFunctionalserialLogsFileCmd699927526/001/logs.txt: (1.358509299s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.36s)

TestFunctional/serial/InvalidService (4.37s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-387153 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-387153
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-387153: exit status 115 (321.456905ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30688 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-387153 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.37s)

TestFunctional/parallel/ConfigCmd (0.35s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-387153 config get cpus: exit status 14 (62.030559ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-387153 config get cpus: exit status 14 (50.783482ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.35s)

TestFunctional/parallel/DashboardCmd (9.06s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-387153 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-387153 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 178160: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.06s)

TestFunctional/parallel/DryRun (0.48s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-387153 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-387153 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (204.263015ms)

-- stdout --
	* [functional-387153] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16890
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16890-138069/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-138069/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0717 18:53:52.540576  177675 out.go:296] Setting OutFile to fd 1 ...
	I0717 18:53:52.540703  177675 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 18:53:52.540716  177675 out.go:309] Setting ErrFile to fd 2...
	I0717 18:53:52.540722  177675 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 18:53:52.540947  177675 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-138069/.minikube/bin
	I0717 18:53:52.541509  177675 out.go:303] Setting JSON to false
	I0717 18:53:52.542468  177675 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":12984,"bootTime":1689607049,"procs":267,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 18:53:52.542542  177675 start.go:138] virtualization: kvm guest
	I0717 18:53:52.556890  177675 out.go:177] * [functional-387153] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 18:53:52.560214  177675 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 18:53:52.560150  177675 notify.go:220] Checking for updates...
	I0717 18:53:52.562890  177675 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 18:53:52.564576  177675 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16890-138069/kubeconfig
	I0717 18:53:52.566552  177675 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-138069/.minikube
	I0717 18:53:52.567838  177675 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 18:53:52.569607  177675 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 18:53:52.573550  177675 config.go:182] Loaded profile config "functional-387153": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 18:53:52.574210  177675 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 18:53:52.609853  177675 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 18:53:52.609952  177675 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 18:53:52.690181  177675 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:50 SystemTime:2023-07-17 18:53:52.677388047 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 18:53:52.690311  177675 docker.go:294] overlay module found
	I0717 18:53:52.694196  177675 out.go:177] * Using the docker driver based on existing profile
	I0717 18:53:52.695757  177675 start.go:298] selected driver: docker
	I0717 18:53:52.695770  177675 start.go:880] validating driver "docker" against &{Name:functional-387153 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-387153 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 18:53:52.695876  177675 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 18:53:52.698100  177675 out.go:177] 
	W0717 18:53:52.699571  177675 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0717 18:53:52.700989  177675 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-387153 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.48s)

TestFunctional/parallel/InternationalLanguage (0.16s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-387153 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-387153 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (161.662085ms)

-- stdout --
	* [functional-387153] minikube v1.30.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16890
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16890-138069/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-138069/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0717 18:53:53.022223  177916 out.go:296] Setting OutFile to fd 1 ...
	I0717 18:53:53.022349  177916 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 18:53:53.022361  177916 out.go:309] Setting ErrFile to fd 2...
	I0717 18:53:53.022365  177916 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 18:53:53.022756  177916 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-138069/.minikube/bin
	I0717 18:53:53.023308  177916 out.go:303] Setting JSON to false
	I0717 18:53:53.024492  177916 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":12984,"bootTime":1689607049,"procs":266,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 18:53:53.024564  177916 start.go:138] virtualization: kvm guest
	I0717 18:53:53.027180  177916 out.go:177] * [functional-387153] minikube v1.30.1 sur Ubuntu 20.04 (kvm/amd64)
	I0717 18:53:53.028997  177916 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 18:53:53.030608  177916 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 18:53:53.028958  177916 notify.go:220] Checking for updates...
	I0717 18:53:53.033876  177916 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16890-138069/kubeconfig
	I0717 18:53:53.035734  177916 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-138069/.minikube
	I0717 18:53:53.037295  177916 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 18:53:53.038948  177916 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 18:53:53.041621  177916 config.go:182] Loaded profile config "functional-387153": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 18:53:53.042982  177916 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 18:53:53.067542  177916 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 18:53:53.067625  177916 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 18:53:53.127739  177916 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:48 SystemTime:2023-07-17 18:53:53.119140261 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 18:53:53.127842  177916 docker.go:294] overlay module found
	I0717 18:53:53.130202  177916 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0717 18:53:53.131881  177916 start.go:298] selected driver: docker
	I0717 18:53:53.131901  177916 start.go:880] validating driver "docker" against &{Name:functional-387153 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-387153 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 18:53:53.132033  177916 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 18:53:53.134357  177916 out.go:177] 
	W0717 18:53:53.136071  177916 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0717 18:53:53.137641  177916 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

TestFunctional/parallel/StatusCmd (0.91s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.91s)

TestFunctional/parallel/ServiceCmdConnect (9.56s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-387153 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-387153 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6fb669fc84-spm94" [350565f1-29bb-4726-a662-0e51fe47c456] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-6fb669fc84-spm94" [350565f1-29bb-4726-a662-0e51fe47c456] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.006881572s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:30760
functional_test.go:1674: http://192.168.49.2:30760: success! body:

Hostname: hello-node-connect-6fb669fc84-spm94

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30760
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.56s)

TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

TestFunctional/parallel/PersistentVolumeClaim (28.97s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [d1745c05-6b06-4600-b6d6-461d15675eb3] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.008691535s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-387153 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-387153 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-387153 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-387153 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [87d96ac4-2453-43e9-be98-2b104e1eb163] Pending
helpers_test.go:344: "sp-pod" [87d96ac4-2453-43e9-be98-2b104e1eb163] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [87d96ac4-2453-43e9-be98-2b104e1eb163] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.007316997s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-387153 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-387153 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-387153 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d2f76266-25ba-494e-b53d-9898902354b7] Pending
helpers_test.go:344: "sp-pod" [d2f76266-25ba-494e-b53d-9898902354b7] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d2f76266-25ba-494e-b53d-9898902354b7] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.01493521s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-387153 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.97s)

TestFunctional/parallel/SSHCmd (0.62s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.62s)

TestFunctional/parallel/CpCmd (1.37s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 ssh -n functional-387153 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 cp functional-387153:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1915178121/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 ssh -n functional-387153 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.37s)

TestFunctional/parallel/MySQL (20.43s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-387153 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
2023/07/17 18:54:01 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:344: "mysql-7db894d786-dtst7" [dc8453dc-1f8d-48a9-a096-7c9d5665f6bb] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-7db894d786-dtst7" [dc8453dc-1f8d-48a9-a096-7c9d5665f6bb] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 17.053192319s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-387153 exec mysql-7db894d786-dtst7 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-387153 exec mysql-7db894d786-dtst7 -- mysql -ppassword -e "show databases;": exit status 1 (196.987108ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-387153 exec mysql-7db894d786-dtst7 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-387153 exec mysql-7db894d786-dtst7 -- mysql -ppassword -e "show databases;": exit status 1 (135.972204ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-387153 exec mysql-7db894d786-dtst7 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (20.43s)

TestFunctional/parallel/FileSync (0.33s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/144822/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 ssh "sudo cat /etc/test/nested/copy/144822/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)

TestFunctional/parallel/CertSync (1.81s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/144822.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 ssh "sudo cat /etc/ssl/certs/144822.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/144822.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 ssh "sudo cat /usr/share/ca-certificates/144822.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/1448222.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 ssh "sudo cat /etc/ssl/certs/1448222.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/1448222.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 ssh "sudo cat /usr/share/ca-certificates/1448222.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.81s)

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-387153 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.55s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-387153 ssh "sudo systemctl is-active docker": exit status 1 (280.22846ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-387153 ssh "sudo systemctl is-active containerd": exit status 1 (271.984817ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.55s)

TestFunctional/parallel/License (0.13s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.13s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-387153 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-387153 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-775766b4cc-ggljq" [f06623e9-fa7b-487e-a5f5-f6dc303ff04a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-775766b4cc-ggljq" [f06623e9-fa7b-487e-a5f5-f6dc303ff04a] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.014178253s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.25s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.43s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-387153 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-387153 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-387153 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 174391: os: process already finished
helpers_test.go:502: unable to terminate pid 174091: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-387153 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.43s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-387153 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.4s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-387153 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [181b9907-2c83-4c41-aa8e-51584bae0fea] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [181b9907-2c83-4c41-aa8e-51584bae0fea] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.007707645s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.40s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-387153 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/ServiceCmd/List (0.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.47s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.106.214.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-387153 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 service list -o json
functional_test.go:1493: Took "506.637381ms" to run "out/minikube-linux-amd64 -p functional-387153 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:32204
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

TestFunctional/parallel/MountCmd/any-port (7.21s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-387153 /tmp/TestFunctionalparallelMountCmdany-port707136085/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1689620029081882551" to /tmp/TestFunctionalparallelMountCmdany-port707136085/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1689620029081882551" to /tmp/TestFunctionalparallelMountCmdany-port707136085/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1689620029081882551" to /tmp/TestFunctionalparallelMountCmdany-port707136085/001/test-1689620029081882551
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-387153 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (274.904969ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 17 18:53 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 17 18:53 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 17 18:53 test-1689620029081882551
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 ssh cat /mount-9p/test-1689620029081882551
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-387153 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [a9f1a2cb-f286-4029-9964-80d598678a8e] Pending
helpers_test.go:344: "busybox-mount" [a9f1a2cb-f286-4029-9964-80d598678a8e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [a9f1a2cb-f286-4029-9964-80d598678a8e] Running
helpers_test.go:344: "busybox-mount" [a9f1a2cb-f286-4029-9964-80d598678a8e] Running: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [a9f1a2cb-f286-4029-9964-80d598678a8e] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.007194491s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-387153 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-387153 /tmp/TestFunctionalparallelMountCmdany-port707136085/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.21s)
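The any-port flow above can be reproduced by hand with the same commands the test drives. A minimal sketch, assuming the functional-387153 profile from this run is still up; /tmp/demo-mount and the file name are placeholders introduced here, not paths from the report:

    # create a host directory and start a background 9p mount into the node
    mkdir -p /tmp/demo-mount && echo hello > /tmp/demo-mount/created-by-hand
    out/minikube-linux-amd64 mount -p functional-387153 /tmp/demo-mount:/mount-9p --alsologtostderr -v=1 &
    MOUNT_PID=$!

    # verify the 9p filesystem is mounted and readable from the guest
    out/minikube-linux-amd64 -p functional-387153 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-387153 ssh -- ls -la /mount-9p
    out/minikube-linux-amd64 -p functional-387153 ssh cat /mount-9p/created-by-hand

    # clean up: unmount inside the node, then stop the background mount process
    out/minikube-linux-amd64 -p functional-387153 ssh "sudo umount -f /mount-9p"
    kill $MOUNT_PID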

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "306.314205ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "46.833823ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.34s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "282.14769ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "57.654864ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:32204
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)
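The ServiceCmd/HTTPS and ServiceCmd/URL subtests above only resolve the NodePort endpoint for the hello-node service. A short sketch of using the resolved URL afterwards; the curl step is an illustration added here, not part of the test:

    # capture the URL printed by minikube and fetch it once
    URL=$(out/minikube-linux-amd64 -p functional-387153 service hello-node --url)
    echo "$URL"
    curl -s "$URL" | head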

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (1.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-amd64 -p functional-387153 version -o=json --components: (1.544064912s)
--- PASS: TestFunctional/parallel/Version/components (1.54s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-387153 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.27.3
registry.k8s.io/kube-proxy:v1.27.3
registry.k8s.io/kube-controller-manager:v1.27.3
registry.k8s.io/kube-apiserver:v1.27.3
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-387153
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230511-dc714da8
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-387153 image ls --format short --alsologtostderr:
I0717 18:54:17.135870  181540 out.go:296] Setting OutFile to fd 1 ...
I0717 18:54:17.136044  181540 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 18:54:17.136055  181540 out.go:309] Setting ErrFile to fd 2...
I0717 18:54:17.136061  181540 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 18:54:17.136260  181540 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-138069/.minikube/bin
I0717 18:54:17.136838  181540 config.go:182] Loaded profile config "functional-387153": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 18:54:17.136940  181540 config.go:182] Loaded profile config "functional-387153": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 18:54:17.137300  181540 cli_runner.go:164] Run: docker container inspect functional-387153 --format={{.State.Status}}
I0717 18:54:17.156373  181540 ssh_runner.go:195] Run: systemctl --version
I0717 18:54:17.156428  181540 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-387153
I0717 18:54:17.184632  181540 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/functional-387153/id_rsa Username:docker}
I0717 18:54:17.276752  181540 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)
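As the stderr above shows, "image ls" answers by running "sudo crictl images --output json" over SSH inside the node. A rough cross-check of the short listing against the runtime itself, using only commands that already appear in this log:

    # what minikube reports
    out/minikube-linux-amd64 -p functional-387153 image ls --format short

    # what CRI-O itself reports inside the node
    out/minikube-linux-amd64 -p functional-387153 ssh -- sudo crictl images --output json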

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-387153 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-controller-manager | v1.27.3            | 7cffc01dba0e1 | 114MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.7-0            | 86b6af7dd652c | 297MB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/nginx                 | alpine             | 4937520ae206c | 43.2MB |
| docker.io/library/nginx                 | latest             | 021283c8eb95b | 191MB  |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/kube-apiserver          | v1.27.3            | 08a0c939e61b7 | 122MB  |
| registry.k8s.io/kube-proxy              | v1.27.3            | 5780543258cf0 | 72.7MB |
| docker.io/library/mysql                 | 5.7                | 2be84dd575ee2 | 588MB  |
| gcr.io/google-containers/addon-resizer  | functional-387153  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/kube-scheduler          | v1.27.3            | 41697ceeb70b3 | 59.8MB |
| docker.io/kindest/kindnetd              | v20230511-dc714da8 | b0b1fa0f58c6e | 65.2MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-387153 image ls --format table --alsologtostderr:
I0717 18:54:17.386974  181706 out.go:296] Setting OutFile to fd 1 ...
I0717 18:54:17.387133  181706 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 18:54:17.387147  181706 out.go:309] Setting ErrFile to fd 2...
I0717 18:54:17.387155  181706 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 18:54:17.387470  181706 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-138069/.minikube/bin
I0717 18:54:17.388378  181706 config.go:182] Loaded profile config "functional-387153": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 18:54:17.388552  181706 config.go:182] Loaded profile config "functional-387153": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 18:54:17.389140  181706 cli_runner.go:164] Run: docker container inspect functional-387153 --format={{.State.Status}}
I0717 18:54:17.410736  181706 ssh_runner.go:195] Run: systemctl --version
I0717 18:54:17.410779  181706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-387153
I0717 18:54:17.428794  181706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/functional-387153/id_rsa Username:docker}
I0717 18:54:17.520208  181706 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-387153 image ls --format json --alsologtostderr:
[{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb","registry.k8s.io/kube-apiserver@sha256:fd03335dd2e7163e5e36e933a0c735d7fec6f42b33ddafad0bc54f333e4a23c0"],"repoTags":["registry.k8s.io/kube-apiserver:v1.27.3"],"size":"122065872"},{"id":"7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e","registry.k8s.io/kube-controller-manager@sha25
6:d3bdc20876edfaa4894cf8464dc98592385a43cbc033b37846dccc2460c7bc06"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.27.3"],"size":"113919286"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"2be84dd575ee2ecdb186dc43a9cd951890a764d2cefbd31a72cdf4410c43a2d0","repoDigests":["docker.io/library/mysql@sha256:03b6dcedf5a2754da00e119e2cc6094ed3c884ad36b67bb25fe67be4b4f9bdb1","docker.io/library/mysql@sha256:bd873931ef20f30a5a9bf71498ce4e02c88cf48b2e8b782c337076d814deebde"],"repoTags":["docker.io/library/mysql:5.7"],"size":"588268197"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ec
d061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-387153"],"size":"34114467"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681","repoDigests":["registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83","registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9"],"repoTags":["registry.k8s.io/etcd:3.5.7-0"],"size":"297083935"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k
8s.io/pause:3.3"],"size":"686139"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da","repoDigests":["docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974","docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"],"repoTags":["docker.io/kindest/kindnetd:v20230511-dc714da8"],"size":"65249302"},{"id":"021283c8eb95be02b23db0de7f
609d603553c6714785e7a673c6594a624ffbda","repoDigests":["docker.io/library/nginx@sha256:08bc36ad52474e528cc1ea3426b5e3f4bad8a130318e3140d6cfe29c8892c7ef","docker.io/library/nginx@sha256:1bb5c4b86cb7c1e9f0209611dc2135d8a2c1c3a6436163970c99193787d067ea"],"repoTags":["docker.io/library/nginx:latest"],"size":"191044865"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c","repoDigests":["registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f","registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699"],"repoTags":["registry.k8s.io/kube-proxy:v1.27.3"],"size":"72713623"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["do
cker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"4937520ae206c8969734d9a659fc1e6594d9b22b9340bf0796defbea0c92dd02","repoDigests":["docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6","docker.io/library/nginx@sha256:2d4efe74ef541248b0a70838c557de04509d1115dec6bfc21ad0d66e41574a8a"],"repoTags":["docker.io/library/nginx:alpine"],"size":"43220780"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"41697ceeb70b3f49e54ed46f2cf27ac5b3a
201a7d9668ca327588b23fafdf36a","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082","registry.k8s.io/kube-scheduler@sha256:77b8db7564e395328905beb74a0b9a5db3218a4b16ec19af174957e518df40c8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.27.3"],"size":"59811126"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-387153 image ls --format json --alsologtostderr:
I0717 18:54:17.374635  181700 out.go:296] Setting OutFile to fd 1 ...
I0717 18:54:17.374790  181700 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 18:54:17.374803  181700 out.go:309] Setting ErrFile to fd 2...
I0717 18:54:17.374809  181700 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 18:54:17.375137  181700 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-138069/.minikube/bin
I0717 18:54:17.375958  181700 config.go:182] Loaded profile config "functional-387153": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 18:54:17.376128  181700 config.go:182] Loaded profile config "functional-387153": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 18:54:17.376721  181700 cli_runner.go:164] Run: docker container inspect functional-387153 --format={{.State.Status}}
I0717 18:54:17.399893  181700 ssh_runner.go:195] Run: systemctl --version
I0717 18:54:17.399948  181700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-387153
I0717 18:54:17.422310  181700 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/functional-387153/id_rsa Username:docker}
I0717 18:54:17.516392  181700 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-387153 image ls --format yaml --alsologtostderr:
- id: 2be84dd575ee2ecdb186dc43a9cd951890a764d2cefbd31a72cdf4410c43a2d0
repoDigests:
- docker.io/library/mysql@sha256:03b6dcedf5a2754da00e119e2cc6094ed3c884ad36b67bb25fe67be4b4f9bdb1
- docker.io/library/mysql@sha256:bd873931ef20f30a5a9bf71498ce4e02c88cf48b2e8b782c337076d814deebde
repoTags:
- docker.io/library/mysql:5.7
size: "588268197"
- id: 021283c8eb95be02b23db0de7f609d603553c6714785e7a673c6594a624ffbda
repoDigests:
- docker.io/library/nginx@sha256:08bc36ad52474e528cc1ea3426b5e3f4bad8a130318e3140d6cfe29c8892c7ef
- docker.io/library/nginx@sha256:1bb5c4b86cb7c1e9f0209611dc2135d8a2c1c3a6436163970c99193787d067ea
repoTags:
- docker.io/library/nginx:latest
size: "191044865"
- id: 7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e
- registry.k8s.io/kube-controller-manager@sha256:d3bdc20876edfaa4894cf8464dc98592385a43cbc033b37846dccc2460c7bc06
repoTags:
- registry.k8s.io/kube-controller-manager:v1.27.3
size: "113919286"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da
repoDigests:
- docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974
- docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9
repoTags:
- docker.io/kindest/kindnetd:v20230511-dc714da8
size: "65249302"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-387153
size: "34114467"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 4937520ae206c8969734d9a659fc1e6594d9b22b9340bf0796defbea0c92dd02
repoDigests:
- docker.io/library/nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6
- docker.io/library/nginx@sha256:2d4efe74ef541248b0a70838c557de04509d1115dec6bfc21ad0d66e41574a8a
repoTags:
- docker.io/library/nginx:alpine
size: "43220780"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681
repoDigests:
- registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83
- registry.k8s.io/etcd@sha256:8ae03c7bbd43d5c301eea33a39ac5eda2964f826050cb2ccf3486f18917590c9
repoTags:
- registry.k8s.io/etcd:3.5.7-0
size: "297083935"
- id: 08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e4d78564d3ce7ab34940eacc61c90d035cb8a6335552c9380eaff474e791ccbb
- registry.k8s.io/kube-apiserver@sha256:fd03335dd2e7163e5e36e933a0c735d7fec6f42b33ddafad0bc54f333e4a23c0
repoTags:
- registry.k8s.io/kube-apiserver:v1.27.3
size: "122065872"
- id: 5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c
repoDigests:
- registry.k8s.io/kube-proxy@sha256:091c9fe8428334e2451a0e5d214d40c415f2e0d0861794ee941f48003726570f
- registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699
repoTags:
- registry.k8s.io/kube-proxy:v1.27.3
size: "72713623"
- id: 41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2b43d8f86e9fdc96a38743ab2b6efffd8b63d189f2c41e5de0f8deb8a8d0e082
- registry.k8s.io/kube-scheduler@sha256:77b8db7564e395328905beb74a0b9a5db3218a4b16ec19af174957e518df40c8
repoTags:
- registry.k8s.io/kube-scheduler:v1.27.3
size: "59811126"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-387153 image ls --format yaml --alsologtostderr:
I0717 18:54:17.137636  181542 out.go:296] Setting OutFile to fd 1 ...
I0717 18:54:17.137794  181542 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 18:54:17.137803  181542 out.go:309] Setting ErrFile to fd 2...
I0717 18:54:17.137807  181542 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 18:54:17.138016  181542 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-138069/.minikube/bin
I0717 18:54:17.138562  181542 config.go:182] Loaded profile config "functional-387153": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 18:54:17.138677  181542 config.go:182] Loaded profile config "functional-387153": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 18:54:17.139061  181542 cli_runner.go:164] Run: docker container inspect functional-387153 --format={{.State.Status}}
I0717 18:54:17.156828  181542 ssh_runner.go:195] Run: systemctl --version
I0717 18:54:17.156889  181542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-387153
I0717 18:54:17.182362  181542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/functional-387153/id_rsa Username:docker}
I0717 18:54:17.276555  181542 ssh_runner.go:195] Run: sudo crictl images --output json
W0717 18:54:17.323765  181542 root.go:91] failed to log command end to audit: failed to find a log row with id equals to 86817b20-2ff6-49c7-b180-65d81e3479d1
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-387153 ssh pgrep buildkitd: exit status 1 (280.548355ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 image build -t localhost/my-image:functional-387153 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-387153 image build -t localhost/my-image:functional-387153 testdata/build --alsologtostderr: (1.522346745s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-387153 image build -t localhost/my-image:functional-387153 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 2569d301a84
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-387153
--> 92d6f0298df
Successfully tagged localhost/my-image:functional-387153
92d6f0298df493aa8e8399639f02cac0b41d80a61aeed07651ff05dfe7e4642a
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-387153 image build -t localhost/my-image:functional-387153 testdata/build --alsologtostderr:
I0717 18:54:17.412522  181724 out.go:296] Setting OutFile to fd 1 ...
I0717 18:54:17.412681  181724 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 18:54:17.412694  181724 out.go:309] Setting ErrFile to fd 2...
I0717 18:54:17.412701  181724 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 18:54:17.412985  181724 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-138069/.minikube/bin
I0717 18:54:17.413772  181724 config.go:182] Loaded profile config "functional-387153": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 18:54:17.414318  181724 config.go:182] Loaded profile config "functional-387153": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
I0717 18:54:17.414697  181724 cli_runner.go:164] Run: docker container inspect functional-387153 --format={{.State.Status}}
I0717 18:54:17.434179  181724 ssh_runner.go:195] Run: systemctl --version
I0717 18:54:17.434240  181724 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-387153
I0717 18:54:17.451439  181724 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/functional-387153/id_rsa Username:docker}
I0717 18:54:17.540555  181724 build_images.go:151] Building image from path: /tmp/build.4245814139.tar
I0717 18:54:17.540626  181724 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0717 18:54:17.562690  181724 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4245814139.tar
I0717 18:54:17.566616  181724 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4245814139.tar: stat -c "%s %y" /var/lib/minikube/build/build.4245814139.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4245814139.tar': No such file or directory
I0717 18:54:17.566653  181724 ssh_runner.go:362] scp /tmp/build.4245814139.tar --> /var/lib/minikube/build/build.4245814139.tar (3072 bytes)
I0717 18:54:17.592298  181724 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4245814139
I0717 18:54:17.600235  181724 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4245814139 -xf /var/lib/minikube/build/build.4245814139.tar
I0717 18:54:17.609349  181724 crio.go:297] Building image: /var/lib/minikube/build/build.4245814139
I0717 18:54:17.609404  181724 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-387153 /var/lib/minikube/build/build.4245814139 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0717 18:54:18.867698  181724 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-387153 /var/lib/minikube/build/build.4245814139 --cgroup-manager=cgroupfs: (1.258260197s)
I0717 18:54:18.867760  181724 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4245814139
I0717 18:54:18.876484  181724 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4245814139.tar
I0717 18:54:18.884077  181724 build_images.go:207] Built localhost/my-image:functional-387153 from /tmp/build.4245814139.tar
I0717 18:54:18.884105  181724 build_images.go:123] succeeded building to: functional-387153
I0717 18:54:18.884109  181724 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.01s)
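The three STEP lines in the build output imply a build context equivalent to a FROM/RUN/ADD Containerfile. A minimal sketch that rebuilds the same kind of image; /tmp/demo-build and the content.txt contents are placeholders, since the real testdata/build context is not included in this report:

    # recreate a build context matching the steps seen above
    mkdir -p /tmp/demo-build
    echo hello > /tmp/demo-build/content.txt
    printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > /tmp/demo-build/Dockerfile

    # build inside the node and confirm the tag is listed
    out/minikube-linux-amd64 -p functional-387153 image build -t localhost/my-image:functional-387153 /tmp/demo-build --alsologtostderr
    out/minikube-linux-amd64 -p functional-387153 image ls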

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-387153
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.00s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 image load --daemon gcr.io/google-containers/addon-resizer:functional-387153 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-387153 image load --daemon gcr.io/google-containers/addon-resizer:functional-387153 --alsologtostderr: (4.966872299s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.21s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-387153 /tmp/TestFunctionalparallelMountCmdspecific-port503260066/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-387153 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (334.949508ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-387153 /tmp/TestFunctionalparallelMountCmdspecific-port503260066/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-387153 ssh "sudo umount -f /mount-9p": exit status 1 (290.733996ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-387153 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-387153 /tmp/TestFunctionalparallelMountCmdspecific-port503260066/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.11s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 image load --daemon gcr.io/google-containers/addon-resizer:functional-387153 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-387153 image load --daemon gcr.io/google-containers/addon-resizer:functional-387153 --alsologtostderr: (3.159538262s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.39s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-387153 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1885789063/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-387153 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1885789063/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-387153 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1885789063/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-387153 ssh "findmnt -T" /mount1: exit status 1 (472.493827ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
E0717 18:53:59.096013  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/client.crt: no such file or directory
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-387153 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-387153 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1885789063/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-387153 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1885789063/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-387153 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1885789063/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.12s)
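The cleanup step above relies on "mount --kill=true", which terminates every background mount process for a profile in one go. A sketch of the same sequence outside the test harness; the /tmp/m1 and /tmp/m2 host directories are placeholders:

    # start two background mounts against the same profile
    mkdir -p /tmp/m1 /tmp/m2
    out/minikube-linux-amd64 mount -p functional-387153 /tmp/m1:/mount1 --alsologtostderr -v=1 &
    out/minikube-linux-amd64 mount -p functional-387153 /tmp/m2:/mount2 --alsologtostderr -v=1 &

    # confirm both are visible from the guest
    out/minikube-linux-amd64 -p functional-387153 ssh "findmnt -T" /mount1
    out/minikube-linux-amd64 -p functional-387153 ssh "findmnt -T" /mount2

    # kill all mount processes for the profile at once
    out/minikube-linux-amd64 mount -p functional-387153 --kill=true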

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 image save gcr.io/google-containers/addon-resizer:functional-387153 /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-387153 image save gcr.io/google-containers/addon-resizer:functional-387153 /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.823550703s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.82s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 image rm gcr.io/google-containers/addon-resizer:functional-387153 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.17s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-387153
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-387153 image save --daemon gcr.io/google-containers/addon-resizer:functional-387153 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-387153 image save --daemon gcr.io/google-containers/addon-resizer:functional-387153 --alsologtostderr: (2.152257735s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-387153
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.19s)
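Taken together, the last four image subtests exercise a full save, remove, and reload round trip. A condensed sketch using the same commands; the /tmp tarball path is a placeholder for the workspace path used in this run:

    # save the image from the cluster to a tarball on the host
    out/minikube-linux-amd64 -p functional-387153 image save gcr.io/google-containers/addon-resizer:functional-387153 /tmp/addon-resizer-save.tar --alsologtostderr

    # remove it from the cluster, then restore it from the tarball
    out/minikube-linux-amd64 -p functional-387153 image rm gcr.io/google-containers/addon-resizer:functional-387153 --alsologtostderr
    out/minikube-linux-amd64 -p functional-387153 image load /tmp/addon-resizer-save.tar --alsologtostderr
    out/minikube-linux-amd64 -p functional-387153 image ls

    # or push it back into the local docker daemon instead of a file
    out/minikube-linux-amd64 -p functional-387153 image save --daemon gcr.io/google-containers/addon-resizer:functional-387153 --alsologtostderr
    docker image inspect gcr.io/google-containers/addon-resizer:functional-387153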

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-387153
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-387153
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-387153
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (67.39s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-795879 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0717 18:55:21.016477  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-795879 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m7.38529275s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (67.39s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (9.29s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-795879 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-795879 addons enable ingress --alsologtostderr -v=5: (9.287056842s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (9.29s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.56s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-795879 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.56s)

                                                
                                    
TestJSONOutput/start/Command (69.35s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-250009 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0717 18:58:48.288735  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/functional-387153/client.crt: no such file or directory
E0717 18:58:58.529720  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/functional-387153/client.crt: no such file or directory
E0717 18:59:19.010827  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/functional-387153/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-250009 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m9.349226922s)
--- PASS: TestJSONOutput/start/Command (69.35s)
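The Audit and currentstep subtests below consume the JSON event stream that --output=json produces. A sketch of capturing that stream for manual inspection; the profile name json-output-demo is a placeholder, and the jq path .data.currentstep is an assumption about the event schema rather than something shown in this report:

    # record the line-delimited JSON events emitted during start
    out/minikube-linux-amd64 start -p json-output-demo --output=json --user=testUser --memory=2200 --driver=docker --container-runtime=crio | tee /tmp/start-events.json

    # list the step counters, which the subtests expect to be distinct and increasing
    jq -r 'select(.data.currentstep != null) | .data.currentstep' /tmp/start-events.json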

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.64s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-250009 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.64s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.6s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-250009 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.69s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-250009 --output=json --user=testUser
E0717 18:59:59.972187  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/functional-387153/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-250009 --output=json --user=testUser: (5.68678299s)
--- PASS: TestJSONOutput/stop/Command (5.69s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.19s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-403583 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-403583 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (68.11263ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"cdd4d214-b6e5-4440-9500-b87e60d57f56","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-403583] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"bd20666b-b453-4778-8095-b5594253feeb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16890"}}
	{"specversion":"1.0","id":"1127d899-3385-4305-ba1c-a4d950b634a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"11433eb4-b11e-4e16-8000-c470017fddc6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/16890-138069/kubeconfig"}}
	{"specversion":"1.0","id":"a5dea4e4-083d-4195-83e7-33ba400cafd0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-138069/.minikube"}}
	{"specversion":"1.0","id":"4e7749f7-8681-4ca2-8cd7-927cf732db54","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"016f22e8-f874-4e3e-8140-78bbd2a68d7f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b8b1c473-d5a1-4917-a026-bb37d5843043","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-403583" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-403583
--- PASS: TestErrorJSONOutput (0.19s)
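
The stdout captured above shows that --output=json emits one CloudEvents-style JSON object per line, with the failure reported as an io.k8s.sigs.minikube.error event (exit code 56, DRV_UNSUPPORTED_OS). A minimal sketch for pulling that message back out of the event stream, assuming jq is available on the host (jq is not part of the test run):

  # print only the error message from the JSON event stream
  out/minikube-linux-amd64 start -p json-output-error-403583 --memory=2200 --output=json --wait=true --driver=fail \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
  # expected: The driver 'fail' is not supported on linux/amd64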

                                                
                                    
TestKicCustomNetwork/create_custom_network (32.89s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-855730 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-855730 --network=: (30.824638189s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-855730" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-855730
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-855730: (2.043985963s)
--- PASS: TestKicCustomNetwork/create_custom_network (32.89s)
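
The --network flag used above tells the Docker (KIC) driver which Docker network to place the node container on; when it is left empty, as in this run, minikube creates a dedicated network for the profile, which is what the docker network ls check confirms. A minimal sketch with an explicit network name ("demo-net" is a placeholder, not taken from this run):

  out/minikube-linux-amd64 start -p docker-network-855730 --network=demo-net
  docker network ls --format {{.Name}}             # demo-net should appear in the list
  out/minikube-linux-amd64 delete -p docker-network-855730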

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (24.74s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-530525 --network=bridge
E0717 19:00:42.944150  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/client.crt: no such file or directory
E0717 19:00:42.949484  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/client.crt: no such file or directory
E0717 19:00:42.959809  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/client.crt: no such file or directory
E0717 19:00:42.980166  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/client.crt: no such file or directory
E0717 19:00:43.020541  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/client.crt: no such file or directory
E0717 19:00:43.100926  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/client.crt: no such file or directory
E0717 19:00:43.261396  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/client.crt: no such file or directory
E0717 19:00:43.582012  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/client.crt: no such file or directory
E0717 19:00:44.223022  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/client.crt: no such file or directory
E0717 19:00:45.503776  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/client.crt: no such file or directory
E0717 19:00:48.064262  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/client.crt: no such file or directory
E0717 19:00:53.184483  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/client.crt: no such file or directory
E0717 19:01:03.424802  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-530525 --network=bridge: (22.794498796s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-530525" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-530525
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-530525: (1.926931914s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.74s)

                                                
                                    
TestKicExistingNetwork (23.99s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-787955 --network=existing-network
E0717 19:01:21.893306  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/functional-387153/client.crt: no such file or directory
E0717 19:01:23.905119  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-787955 --network=existing-network: (21.978183725s)
helpers_test.go:175: Cleaning up "existing-network-787955" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-787955
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-787955: (1.87537313s)
--- PASS: TestKicExistingNetwork (23.99s)
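
Here the cluster is attached to a Docker network that already exists rather than one minikube creates. A minimal sketch of the same flow, assuming the network is created up front with docker network create (the test provisions it itself before calling minikube):

  docker network create existing-network
  out/minikube-linux-amd64 start -p existing-network-787955 --network=existing-network
  out/minikube-linux-amd64 delete -p existing-network-787955
  docker network rm existing-network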

                                                
                                    
TestKicCustomSubnet (27.64s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-655223 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-655223 --subnet=192.168.60.0/24: (25.679594507s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-655223 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-655223" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-655223
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-655223: (1.947053864s)
--- PASS: TestKicCustomSubnet (27.64s)
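
The subnet requested on the command line is what the docker network inspect template is expected to report back. A minimal sketch using the same flag values as the run above:

  out/minikube-linux-amd64 start -p custom-subnet-655223 --subnet=192.168.60.0/24
  docker network inspect custom-subnet-655223 --format "{{(index .IPAM.Config 0).Subnet}}"   # expected: 192.168.60.0/24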

                                                
                                    
TestKicStaticIP (25.84s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-061107 --static-ip=192.168.200.200
E0717 19:02:04.866875  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-061107 --static-ip=192.168.200.200: (23.763328965s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-061107 ip
helpers_test.go:175: Cleaning up "static-ip-061107" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-061107
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-061107: (1.956399016s)
--- PASS: TestKicStaticIP (25.84s)
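
The same pattern applies to a fixed node IP: the address passed via --static-ip is what the ip subcommand should print afterwards. A minimal sketch with the values from the run above:

  out/minikube-linux-amd64 start -p static-ip-061107 --static-ip=192.168.200.200
  out/minikube-linux-amd64 -p static-ip-061107 ip   # expected: 192.168.200.200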

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (50.59s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-141846 --driver=docker  --container-runtime=crio
E0717 19:02:37.170441  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-141846 --driver=docker  --container-runtime=crio: (20.803935865s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-145583 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-145583 --driver=docker  --container-runtime=crio: (24.868807957s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-141846
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-145583
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-145583" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-145583
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-145583: (1.808486716s)
helpers_test.go:175: Cleaning up "first-141846" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-141846
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-141846: (2.145975713s)
--- PASS: TestMinikubeProfile (50.59s)
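
The profile test boils down to creating two clusters, switching the active profile between them, and listing both. A minimal sketch using the same profile names and flags as the runs above:

  out/minikube-linux-amd64 start -p first-141846 --driver=docker --container-runtime=crio
  out/minikube-linux-amd64 start -p second-145583 --driver=docker --container-runtime=crio
  out/minikube-linux-amd64 profile first-141846     # make "first-141846" the active profile
  out/minikube-linux-amd64 profile list -ojson      # both profiles should appear in the JSON list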

                                                
                                    
TestMountStart/serial/StartWithMountFirst (5.02s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-830426 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-830426 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.022453526s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.02s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.23s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-830426 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.23s)
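
The two steps above amount to starting a Kubernetes-free profile whose only job is the host mount, then checking the mount from inside the guest. A minimal sketch with the flags from the run above:

  out/minikube-linux-amd64 start -p mount-start-1-830426 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker --container-runtime=crio
  out/minikube-linux-amd64 -p mount-start-1-830426 ssh -- ls /minikube-host   # the mounted host directory should list without error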

                                                
                                    
TestMountStart/serial/StartWithMountSecond (5.46s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-848583 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-848583 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.463826604s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.46s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.24s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-848583 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.24s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.6s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-830426 --alsologtostderr -v=5
E0717 19:03:26.787930  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/client.crt: no such file or directory
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-830426 --alsologtostderr -v=5: (1.600243084s)
--- PASS: TestMountStart/serial/DeleteFirst (1.60s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.24s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-848583 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.24s)

                                                
                                    
TestMountStart/serial/Stop (1.19s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-848583
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-848583: (1.186852664s)
--- PASS: TestMountStart/serial/Stop (1.19s)

                                                
                                    
TestMountStart/serial/RestartStopped (6.77s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-848583
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-848583: (5.768728289s)
--- PASS: TestMountStart/serial/RestartStopped (6.77s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.24s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-848583 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.24s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (87.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-549411 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0717 19:03:38.046943  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/functional-387153/client.crt: no such file or directory
E0717 19:04:05.733783  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/functional-387153/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-549411 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m27.495003466s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549411 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (87.93s)
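
The fresh-start step is a two-node start followed by a status check. A minimal sketch with the flags from the run above; status should report one control plane and one worker, both Running:

  out/minikube-linux-amd64 start -p multinode-549411 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker --container-runtime=crio
  out/minikube-linux-amd64 -p multinode-549411 status --alsologtostderr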

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (2.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-549411 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-549411 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-549411 -- rollout status deployment/busybox: (1.287927856s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-549411 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-549411 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-549411 -- exec busybox-67b7f59bb-8mh6q -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-549411 -- exec busybox-67b7f59bb-rww5s -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-549411 -- exec busybox-67b7f59bb-8mh6q -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-549411 -- exec busybox-67b7f59bb-rww5s -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-549411 -- exec busybox-67b7f59bb-8mh6q -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-549411 -- exec busybox-67b7f59bb-rww5s -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (2.95s)
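
The deployment check above applies a busybox Deployment, waits for the rollout, and then resolves a few DNS names from each pod. A minimal sketch of the same flow; POD stands in for one of the busybox pod names returned by get pods and is not a literal value:

  out/minikube-linux-amd64 kubectl -p multinode-549411 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
  out/minikube-linux-amd64 kubectl -p multinode-549411 -- rollout status deployment/busybox
  out/minikube-linux-amd64 kubectl -p multinode-549411 -- get pods -o jsonpath='{.items[*].metadata.name}'
  out/minikube-linux-amd64 kubectl -p multinode-549411 -- exec POD -- nslookup kubernetes.default.svc.cluster.local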

                                                
                                    
TestMultiNode/serial/AddNode (21.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-549411 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-549411 -v 3 --alsologtostderr: (20.428325277s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549411 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (21.01s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.26s)

                                                
                                    
TestMultiNode/serial/CopyFile (8.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549411 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549411 cp testdata/cp-test.txt multinode-549411:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549411 ssh -n multinode-549411 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549411 cp multinode-549411:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4202692518/001/cp-test_multinode-549411.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549411 ssh -n multinode-549411 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549411 cp multinode-549411:/home/docker/cp-test.txt multinode-549411-m02:/home/docker/cp-test_multinode-549411_multinode-549411-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549411 ssh -n multinode-549411 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549411 ssh -n multinode-549411-m02 "sudo cat /home/docker/cp-test_multinode-549411_multinode-549411-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549411 cp multinode-549411:/home/docker/cp-test.txt multinode-549411-m03:/home/docker/cp-test_multinode-549411_multinode-549411-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549411 ssh -n multinode-549411 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549411 ssh -n multinode-549411-m03 "sudo cat /home/docker/cp-test_multinode-549411_multinode-549411-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549411 cp testdata/cp-test.txt multinode-549411-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549411 ssh -n multinode-549411-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549411 cp multinode-549411-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4202692518/001/cp-test_multinode-549411-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549411 ssh -n multinode-549411-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549411 cp multinode-549411-m02:/home/docker/cp-test.txt multinode-549411:/home/docker/cp-test_multinode-549411-m02_multinode-549411.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549411 ssh -n multinode-549411-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549411 ssh -n multinode-549411 "sudo cat /home/docker/cp-test_multinode-549411-m02_multinode-549411.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549411 cp multinode-549411-m02:/home/docker/cp-test.txt multinode-549411-m03:/home/docker/cp-test_multinode-549411-m02_multinode-549411-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549411 ssh -n multinode-549411-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549411 ssh -n multinode-549411-m03 "sudo cat /home/docker/cp-test_multinode-549411-m02_multinode-549411-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549411 cp testdata/cp-test.txt multinode-549411-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549411 ssh -n multinode-549411-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549411 cp multinode-549411-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4202692518/001/cp-test_multinode-549411-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549411 ssh -n multinode-549411-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549411 cp multinode-549411-m03:/home/docker/cp-test.txt multinode-549411:/home/docker/cp-test_multinode-549411-m03_multinode-549411.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549411 ssh -n multinode-549411-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549411 ssh -n multinode-549411 "sudo cat /home/docker/cp-test_multinode-549411-m03_multinode-549411.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549411 cp multinode-549411-m03:/home/docker/cp-test.txt multinode-549411-m02:/home/docker/cp-test_multinode-549411-m03_multinode-549411-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549411 ssh -n multinode-549411-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549411 ssh -n multinode-549411-m02 "sudo cat /home/docker/cp-test_multinode-549411-m03_multinode-549411-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.81s)
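
The copy-file matrix above exercises every direction of minikube cp (host to node, node to host, node to node), each verified with an ssh cat on the destination. A minimal sketch of one host-to-node and one node-to-node hop, taken from the commands above:

  out/minikube-linux-amd64 -p multinode-549411 cp testdata/cp-test.txt multinode-549411:/home/docker/cp-test.txt
  out/minikube-linux-amd64 -p multinode-549411 cp multinode-549411:/home/docker/cp-test.txt multinode-549411-m02:/home/docker/cp-test_multinode-549411_multinode-549411-m02.txt
  out/minikube-linux-amd64 -p multinode-549411 ssh -n multinode-549411-m02 "sudo cat /home/docker/cp-test_multinode-549411_multinode-549411-m02.txt"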

                                                
                                    
TestMultiNode/serial/StopNode (2.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549411 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-549411 node stop m03: (1.18680865s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549411 status
E0717 19:05:42.943682  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/client.crt: no such file or directory
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-549411 status: exit status 7 (453.567834ms)

                                                
                                                
-- stdout --
	multinode-549411
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-549411-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-549411-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549411 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-549411 status --alsologtostderr: exit status 7 (448.48291ms)

                                                
                                                
-- stdout --
	multinode-549411
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-549411-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-549411-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 19:05:43.216825  241227 out.go:296] Setting OutFile to fd 1 ...
	I0717 19:05:43.216951  241227 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 19:05:43.216960  241227 out.go:309] Setting ErrFile to fd 2...
	I0717 19:05:43.216964  241227 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 19:05:43.217170  241227 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-138069/.minikube/bin
	I0717 19:05:43.217334  241227 out.go:303] Setting JSON to false
	I0717 19:05:43.217361  241227 mustload.go:65] Loading cluster: multinode-549411
	I0717 19:05:43.217478  241227 notify.go:220] Checking for updates...
	I0717 19:05:43.217700  241227 config.go:182] Loaded profile config "multinode-549411": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:05:43.217714  241227 status.go:255] checking status of multinode-549411 ...
	I0717 19:05:43.218843  241227 cli_runner.go:164] Run: docker container inspect multinode-549411 --format={{.State.Status}}
	I0717 19:05:43.236641  241227 status.go:330] multinode-549411 host status = "Running" (err=<nil>)
	I0717 19:05:43.236684  241227 host.go:66] Checking if "multinode-549411" exists ...
	I0717 19:05:43.236939  241227 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-549411
	I0717 19:05:43.253213  241227 host.go:66] Checking if "multinode-549411" exists ...
	I0717 19:05:43.253522  241227 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 19:05:43.253572  241227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-549411
	I0717 19:05:43.270191  241227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/multinode-549411/id_rsa Username:docker}
	I0717 19:05:43.361181  241227 ssh_runner.go:195] Run: systemctl --version
	I0717 19:05:43.365059  241227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:05:43.376057  241227 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 19:05:43.428712  241227 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:56 SystemTime:2023-07-17 19:05:43.42018337 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 19:05:43.429288  241227 kubeconfig.go:92] found "multinode-549411" server: "https://192.168.58.2:8443"
	I0717 19:05:43.429311  241227 api_server.go:166] Checking apiserver status ...
	I0717 19:05:43.429362  241227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 19:05:43.439704  241227 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1422/cgroup
	I0717 19:05:43.448005  241227 api_server.go:182] apiserver freezer: "7:freezer:/docker/45cef728eef070a7d16b710f7f2faee4f9d97e87c3d1ccb69b7e1c7b3c92a882/crio/crio-98f5a2770bd6ce0b2127196c123d3d2a8a19411535ea1ab32bb2615c70cd01b5"
	I0717 19:05:43.448063  241227 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/45cef728eef070a7d16b710f7f2faee4f9d97e87c3d1ccb69b7e1c7b3c92a882/crio/crio-98f5a2770bd6ce0b2127196c123d3d2a8a19411535ea1ab32bb2615c70cd01b5/freezer.state
	I0717 19:05:43.455418  241227 api_server.go:204] freezer state: "THAWED"
	I0717 19:05:43.455446  241227 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0717 19:05:43.459708  241227 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0717 19:05:43.459735  241227 status.go:421] multinode-549411 apiserver status = Running (err=<nil>)
	I0717 19:05:43.459747  241227 status.go:257] multinode-549411 status: &{Name:multinode-549411 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 19:05:43.459763  241227 status.go:255] checking status of multinode-549411-m02 ...
	I0717 19:05:43.460071  241227 cli_runner.go:164] Run: docker container inspect multinode-549411-m02 --format={{.State.Status}}
	I0717 19:05:43.476810  241227 status.go:330] multinode-549411-m02 host status = "Running" (err=<nil>)
	I0717 19:05:43.476841  241227 host.go:66] Checking if "multinode-549411-m02" exists ...
	I0717 19:05:43.477119  241227 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-549411-m02
	I0717 19:05:43.493383  241227 host.go:66] Checking if "multinode-549411-m02" exists ...
	I0717 19:05:43.493683  241227 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 19:05:43.493733  241227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-549411-m02
	I0717 19:05:43.510638  241227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/16890-138069/.minikube/machines/multinode-549411-m02/id_rsa Username:docker}
	I0717 19:05:43.596911  241227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 19:05:43.607261  241227 status.go:257] multinode-549411-m02 status: &{Name:multinode-549411-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0717 19:05:43.607296  241227 status.go:255] checking status of multinode-549411-m03 ...
	I0717 19:05:43.607625  241227 cli_runner.go:164] Run: docker container inspect multinode-549411-m03 --format={{.State.Status}}
	I0717 19:05:43.623793  241227 status.go:330] multinode-549411-m03 host status = "Stopped" (err=<nil>)
	I0717 19:05:43.623822  241227 status.go:343] host is not running, skipping remaining checks
	I0717 19:05:43.623830  241227 status.go:257] multinode-549411-m03 status: &{Name:multinode-549411-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.09s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (10.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549411 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-549411 node start m03 --alsologtostderr: (9.908765236s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549411 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.58s)
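
Stopping and restarting a single node follows the same pattern in both tests above: stop m03, observe that status exits non-zero (exit status 7 in the output above) while the node is down, then bring it back. A minimal sketch using the commands from those runs:

  out/minikube-linux-amd64 -p multinode-549411 node stop m03
  out/minikube-linux-amd64 -p multinode-549411 status                        # exits 7 while any node is stopped
  out/minikube-linux-amd64 -p multinode-549411 node start m03 --alsologtostderr
  out/minikube-linux-amd64 -p multinode-549411 status                        # exits 0 once all nodes are back up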

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (115.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-549411
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-549411
E0717 19:06:10.629767  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/client.crt: no such file or directory
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-549411: (24.849021691s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-549411 --wait=true -v=8 --alsologtostderr
E0717 19:07:37.170885  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-549411 --wait=true -v=8 --alsologtostderr: (1m30.234834673s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-549411
--- PASS: TestMultiNode/serial/RestartKeepsNodes (115.18s)

                                                
                                    
TestMultiNode/serial/DeleteNode (4.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549411 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-549411 node delete m03: (4.059676013s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549411 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.63s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549411 stop
multinode_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p multinode-549411 stop: (23.652221182s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549411 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-549411 status: exit status 7 (82.746823ms)

                                                
                                                
-- stdout --
	multinode-549411
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-549411-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549411 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-549411 status --alsologtostderr: exit status 7 (81.926622ms)

                                                
                                                
-- stdout --
	multinode-549411
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-549411-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 19:08:17.786128  251439 out.go:296] Setting OutFile to fd 1 ...
	I0717 19:08:17.786286  251439 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 19:08:17.786300  251439 out.go:309] Setting ErrFile to fd 2...
	I0717 19:08:17.786307  251439 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 19:08:17.786531  251439 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-138069/.minikube/bin
	I0717 19:08:17.786706  251439 out.go:303] Setting JSON to false
	I0717 19:08:17.786733  251439 mustload.go:65] Loading cluster: multinode-549411
	I0717 19:08:17.786777  251439 notify.go:220] Checking for updates...
	I0717 19:08:17.788062  251439 config.go:182] Loaded profile config "multinode-549411": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:08:17.788142  251439 status.go:255] checking status of multinode-549411 ...
	I0717 19:08:17.789035  251439 cli_runner.go:164] Run: docker container inspect multinode-549411 --format={{.State.Status}}
	I0717 19:08:17.810524  251439 status.go:330] multinode-549411 host status = "Stopped" (err=<nil>)
	I0717 19:08:17.810547  251439 status.go:343] host is not running, skipping remaining checks
	I0717 19:08:17.810556  251439 status.go:257] multinode-549411 status: &{Name:multinode-549411 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 19:08:17.810591  251439 status.go:255] checking status of multinode-549411-m02 ...
	I0717 19:08:17.810920  251439 cli_runner.go:164] Run: docker container inspect multinode-549411-m02 --format={{.State.Status}}
	I0717 19:08:17.826648  251439 status.go:330] multinode-549411-m02 host status = "Stopped" (err=<nil>)
	I0717 19:08:17.826670  251439 status.go:343] host is not running, skipping remaining checks
	I0717 19:08:17.826679  251439 status.go:257] multinode-549411-m02 status: &{Name:multinode-549411-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.82s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (77.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-549411 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0717 19:08:38.046752  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/functional-387153/client.crt: no such file or directory
E0717 19:09:00.218747  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-549411 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m17.133683657s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-549411 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (77.73s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (22.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-549411
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-549411-m02 --driver=docker  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-549411-m02 --driver=docker  --container-runtime=crio: exit status 14 (69.838792ms)

                                                
                                                
-- stdout --
	* [multinode-549411-m02] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16890
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16890-138069/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-138069/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-549411-m02' is duplicated with machine name 'multinode-549411-m02' in profile 'multinode-549411'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-549411-m03 --driver=docker  --container-runtime=crio
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-549411-m03 --driver=docker  --container-runtime=crio: (20.657274533s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-549411
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-549411: exit status 80 (268.158951ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-549411
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-549411-m03 already exists in multinode-549411-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-549411-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-549411-m03: (1.823991781s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (22.86s)
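The two non-zero exits above are minikube's profile-name uniqueness checks (exit status 14 for the duplicated profile name, exit status 80 for the node-add conflict). Below is a minimal Go sketch of the same pre-check done outside the test harness, using `minikube profile list --output=json` (the command also exercised by TestNoKubernetes/serial/ProfileList later in this report). The "valid"/"Name" shape of the decoded JSON is an assumption, not something shown in this report.

// check_profile.go - a minimal sketch, not part of the minikube test suite.
// It shells out to `minikube profile list --output=json` and reports whether
// a candidate profile name is already taken, mirroring the uniqueness check
// that ValidateNameConflict exercises above. The "valid"/"Name" JSON fields
// are an assumption about the output shape.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"
	"os/exec"
)

type profileList struct {
	Valid []struct {
		Name string `json:"Name"`
	} `json:"valid"`
}

func main() {
	candidate := "multinode-549411-m02" // the name the test above tries to reuse
	out, err := exec.Command("minikube", "profile", "list", "--output=json").Output()
	if err != nil {
		log.Fatalf("profile list failed: %v", err)
	}
	var list profileList
	if err := json.Unmarshal(out, &list); err != nil {
		log.Fatalf("decode: %v", err)
	}
	for _, p := range list.Valid {
		if p.Name == candidate {
			fmt.Printf("profile %q already exists; pick another name\n", candidate)
			os.Exit(1)
		}
	}
	fmt.Printf("profile %q is free\n", candidate)
}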

                                                
                                    
TestPreload (143.72s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-733433 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0717 19:10:42.943773  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-733433 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m5.559269402s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-733433 image pull gcr.io/k8s-minikube/busybox
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-733433
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-733433: (5.653169015s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-733433 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-733433 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m9.168669909s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-733433 image list
helpers_test.go:175: Cleaning up "test-preload-733433" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-733433
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-733433: (2.239874461s)
--- PASS: TestPreload (143.72s)

                                                
                                    
TestScheduledStopUnix (97.47s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-097539 --memory=2048 --driver=docker  --container-runtime=crio
E0717 19:12:37.170007  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-097539 --memory=2048 --driver=docker  --container-runtime=crio: (21.70447889s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-097539 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-097539 -n scheduled-stop-097539
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-097539 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-097539 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-097539 -n scheduled-stop-097539
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-097539
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-097539 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0717 19:13:38.047106  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/functional-387153/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-097539
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-097539: exit status 7 (61.19439ms)

                                                
                                                
-- stdout --
	scheduled-stop-097539
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-097539 -n scheduled-stop-097539
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-097539 -n scheduled-stop-097539: exit status 7 (60.481322ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-097539" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-097539
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-097539: (4.490032099s)
--- PASS: TestScheduledStopUnix (97.47s)
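The scheduled-stop flow above is: `minikube stop -p <profile> --schedule <duration>`, optionally `--cancel-scheduled`, then polling status until the host reports Stopped (status exits 7 once it is down, as seen above). Below is a minimal Go sketch of that loop, assuming only a `minikube` binary on PATH and an already-running profile; the profile name is illustrative.

// scheduled_stop.go - a minimal sketch (not from the test suite) that drives
// the same scheduled-stop flow shown above: schedule a stop in 15s, then poll
// `minikube status --format={{.Host}}` until the host reports Stopped.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func main() {
	profile := "scheduled-stop-097539" // illustrative; any running profile works

	if out, err := exec.Command("minikube", "stop", "-p", profile, "--schedule", "15s").CombinedOutput(); err != nil {
		log.Fatalf("schedule stop: %v\n%s", err, out)
	}

	// Poll the host state; `status` exits non-zero (e.g. 7) once the host is
	// down, so ignore the error and look at the printed state instead.
	for i := 0; i < 20; i++ {
		out, _ := exec.Command("minikube", "status", "--format={{.Host}}", "-p", profile).Output()
		state := strings.TrimSpace(string(out))
		fmt.Println("host:", state)
		if state == "Stopped" {
			return
		}
		time.Sleep(5 * time.Second)
	}
	log.Fatal("host never reached Stopped")
}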

                                                
                                    
TestInsufficientStorage (12.78s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-088527 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-088527 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.495270347s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"0bf60e4a-afa1-4ea9-8204-a077f80f98e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-088527] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7ec6154b-d84b-490f-aee5-e49f671a302a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16890"}}
	{"specversion":"1.0","id":"4cdb05a5-73e7-4e91-84d8-365bfb08759a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b406d7b4-c587-451b-bec9-1e995f39b83c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/16890-138069/kubeconfig"}}
	{"specversion":"1.0","id":"47b8bf54-7c97-4fda-aece-a91ce7de48bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-138069/.minikube"}}
	{"specversion":"1.0","id":"1ead24e1-c4b1-48fa-ac43-2bd02d0a004c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"fc53c4f0-2c26-4a96-8319-f4b9d9cbff9c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2d20a4e4-e95e-429e-a992-fc9187106252","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"f91f4a82-eeb8-4023-9b13-625e9ff2299c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"b3ce7e01-33e5-4149-8d06-568cf73e7d87","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"9923f32c-10ed-4f3e-b0ba-aa6a9f55df8b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"7f1ba66e-0003-44c9-9200-1cd104f02220","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-088527 in cluster insufficient-storage-088527","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"16fbcbf5-9237-4091-b401-4ee27d6cc5d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"7630d927-47c9-4442-b6dd-6c8fad983724","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"f9d5cfa5-db4f-4f9e-8428-561493733749","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-088527 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-088527 --output=json --layout=cluster: exit status 7 (251.698952ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-088527","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.30.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-088527","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 19:14:16.002314  273267 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-088527" does not appear in /home/jenkins/minikube-integration/16890-138069/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-088527 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-088527 --output=json --layout=cluster: exit status 7 (248.207106ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-088527","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.30.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-088527","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 19:14:16.250828  273354 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-088527" does not appear in /home/jenkins/minikube-integration/16890-138069/kubeconfig
	E0717 19:14:16.260585  273354 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/insufficient-storage-088527/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-088527" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-088527
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-088527: (1.787881047s)
--- PASS: TestInsufficientStorage (12.78s)
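The `status --output=json --layout=cluster` payload above is a single JSON object per invocation. Below is a minimal Go sketch (not part of the test suite) that decodes it; the struct mirrors only the fields visible in the output above, and the profile name is illustrative.

// cluster_status.go - a minimal sketch (not from the test suite) that decodes
// the `minikube status --output=json --layout=cluster` payload shown above.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type clusterStatus struct {
	Name         string `json:"Name"`
	StatusCode   int    `json:"StatusCode"`
	StatusName   string `json:"StatusName"`
	StatusDetail string `json:"StatusDetail"`
	Nodes        []struct {
		Name       string `json:"Name"`
		StatusName string `json:"StatusName"`
	} `json:"Nodes"`
}

func main() {
	profile := "insufficient-storage-088527" // illustrative profile name
	// status exits 7 in the scenario above, so keep the output even on error.
	out, err := exec.Command("minikube", "status", "-p", profile,
		"--output=json", "--layout=cluster").Output()
	if len(out) == 0 && err != nil {
		log.Fatalf("status: %v", err)
	}
	var st clusterStatus
	if err := json.Unmarshal(out, &st); err != nil {
		log.Fatalf("decode: %v", err)
	}
	fmt.Printf("%s: %s (%d) - %s\n", st.Name, st.StatusName, st.StatusCode, st.StatusDetail)
	for _, n := range st.Nodes {
		fmt.Printf("  node %s: %s\n", n.Name, n.StatusName)
	}
}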

                                                
                                    
TestKubernetesUpgrade (349.91s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-677764 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-677764 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (51.903470756s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-677764
version_upgrade_test.go:239: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-677764: (1.207726989s)
version_upgrade_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-677764 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-677764 status --format={{.Host}}: exit status 7 (72.944017ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-677764 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:255: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-677764 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m28.066854888s)
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-677764 version --output=json
version_upgrade_test.go:279: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:281: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-677764 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:281: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-677764 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (69.215497ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-677764] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16890
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16890-138069/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-138069/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.27.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-677764
	    minikube start -p kubernetes-upgrade-677764 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6777642 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.27.3, by running:
	    
	    minikube start -p kubernetes-upgrade-677764 --kubernetes-version=v1.27.3
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:285: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:287: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-677764 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:287: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-677764 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (26.355246176s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-677764" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-677764
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-677764: (2.163248232s)
--- PASS: TestKubernetesUpgrade (349.91s)
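The upgrade path exercised above is stop, then start again with a newer `--kubernetes-version`; the downgrade attempt is rejected with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED). Below is a minimal Go sketch of the same stop/start sequence using the flags shown above; the profile name and version are illustrative.

// upgrade.go - a minimal sketch (not from the test suite) of the upgrade flow
// exercised above: stop the profile, then start it again with a newer
// --kubernetes-version. A downgrade attempt would exit 106
// (K8S_DOWNGRADE_UNSUPPORTED), as the stderr above shows.
package main

import (
	"log"
	"os/exec"
)

func run(args ...string) {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v failed: %v\n%s", args, err, out)
	}
}

func main() {
	profile := "kubernetes-upgrade-677764" // illustrative
	run("stop", "-p", profile)
	run("start", "-p", profile,
		"--memory=2200",
		"--kubernetes-version=v1.27.3",
		"--driver=docker",
		"--container-runtime=crio")
	log.Printf("profile %s upgraded", profile)
}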

                                                
                                    
TestMissingContainerUpgrade (124.4s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:321: (dbg) Run:  /tmp/minikube-v1.9.0.117810316.exe start -p missing-upgrade-629154 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:321: (dbg) Done: /tmp/minikube-v1.9.0.117810316.exe start -p missing-upgrade-629154 --memory=2200 --driver=docker  --container-runtime=crio: (1m5.221965612s)
version_upgrade_test.go:330: (dbg) Run:  docker stop missing-upgrade-629154
version_upgrade_test.go:335: (dbg) Run:  docker rm missing-upgrade-629154
version_upgrade_test.go:341: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-629154 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0717 19:17:05.989977  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/client.crt: no such file or directory
E0717 19:17:37.170285  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/client.crt: no such file or directory
version_upgrade_test.go:341: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-629154 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (56.227934721s)
helpers_test.go:175: Cleaning up "missing-upgrade-629154" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-629154
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-629154: (2.04702462s)
--- PASS: TestMissingContainerUpgrade (124.40s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.45s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.45s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-404036 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-404036 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (79.779697ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-404036] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16890
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16890-138069/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-138069/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (36.47s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-404036 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-404036 --driver=docker  --container-runtime=crio: (36.084108263s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-404036 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (36.47s)

                                                
                                    
TestNetworkPlugins/group/false (8.37s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-536750 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-536750 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (221.731128ms)

                                                
                                                
-- stdout --
	* [false-536750] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16890
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16890-138069/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-138069/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 19:14:22.203353  275195 out.go:296] Setting OutFile to fd 1 ...
	I0717 19:14:22.203563  275195 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 19:14:22.203603  275195 out.go:309] Setting ErrFile to fd 2...
	I0717 19:14:22.203621  275195 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 19:14:22.203933  275195 root.go:338] Updating PATH: /home/jenkins/minikube-integration/16890-138069/.minikube/bin
	I0717 19:14:22.204789  275195 out.go:303] Setting JSON to false
	I0717 19:14:22.206628  275195 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":14213,"bootTime":1689607049,"procs":818,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0717 19:14:22.206752  275195 start.go:138] virtualization: kvm guest
	I0717 19:14:22.209760  275195 out.go:177] * [false-536750] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0717 19:14:22.212070  275195 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 19:14:22.212109  275195 notify.go:220] Checking for updates...
	I0717 19:14:22.214241  275195 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 19:14:22.216752  275195 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16890-138069/kubeconfig
	I0717 19:14:22.218860  275195 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16890-138069/.minikube
	I0717 19:14:22.220592  275195 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0717 19:14:22.223022  275195 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 19:14:22.225139  275195 config.go:182] Loaded profile config "NoKubernetes-404036": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:14:22.225300  275195 config.go:182] Loaded profile config "offline-crio-369384": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.3
	I0717 19:14:22.225385  275195 config.go:182] Loaded profile config "stopped-upgrade-435958": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0717 19:14:22.225528  275195 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 19:14:22.270849  275195 docker.go:121] docker version: linux-24.0.4:Docker Engine - Community
	I0717 19:14:22.270977  275195 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 19:14:22.358408  275195 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:74 SystemTime:2023-07-17 19:14:22.343361687 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1037-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648062464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-13 Labels:[] ExperimentalBuild:false ServerVersion:24.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0717 19:14:22.358671  275195 docker.go:294] overlay module found
	I0717 19:14:22.361858  275195 out.go:177] * Using the docker driver based on user configuration
	I0717 19:14:22.363688  275195 start.go:298] selected driver: docker
	I0717 19:14:22.363712  275195 start.go:880] validating driver "docker" against <nil>
	I0717 19:14:22.363728  275195 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 19:14:22.366768  275195 out.go:177] 
	W0717 19:14:22.368518  275195 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0717 19:14:22.370341  275195 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-536750 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-536750

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-536750

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-536750

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-536750

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-536750

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-536750

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-536750

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-536750

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-536750

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-536750

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536750"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536750"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536750"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-536750

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536750"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536750"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-536750" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-536750" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-536750" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-536750" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-536750" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-536750" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-536750" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-536750" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536750"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536750"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536750"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536750"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536750"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-536750" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-536750" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-536750" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536750"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536750"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536750"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536750"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536750"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-536750

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536750"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536750"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536750"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536750"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536750"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536750"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536750"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536750"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536750"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536750"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536750"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536750"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536750"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536750"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536750"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536750"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536750"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-536750"

                                                
                                                
----------------------- debugLogs end: false-536750 [took: 7.971853097s] --------------------------------
helpers_test.go:175: Cleaning up "false-536750" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-536750
--- PASS: TestNetworkPlugins/group/false (8.37s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (8.34s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-404036 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-404036 --no-kubernetes --driver=docker  --container-runtime=crio: (5.847255667s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-404036 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-404036 status -o json: exit status 2 (331.900871ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-404036","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-404036
E0717 19:15:01.094279  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/functional-387153/client.crt: no such file or directory
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-404036: (2.157079929s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.34s)

                                                
                                    
TestNoKubernetes/serial/Start (6.97s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-404036 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-404036 --no-kubernetes --driver=docker  --container-runtime=crio: (6.96492371s)
--- PASS: TestNoKubernetes/serial/Start (6.97s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-404036 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-404036 "sudo systemctl is-active --quiet service kubelet": exit status 1 (270.942266ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
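
The VerifyK8sNotRunning step above simply probes the kubelet unit over SSH and treats a non-zero exit as success. A sketch of the same probe, using the exact command from the log; "systemctl is-active --quiet" prints nothing and reports state only through its exit code.

    # Exit status 0 would mean kubelet is active; the test expects a failure (status 3 in the log).
    if out/minikube-linux-amd64 ssh -p NoKubernetes-404036 "sudo systemctl is-active --quiet service kubelet"; then
        echo "kubelet unexpectedly running"
    else
        echo "kubelet not running (expected)"
    fi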

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.54s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.54s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-404036
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-404036: (1.317638687s)
--- PASS: TestNoKubernetes/serial/Stop (1.32s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (8.41s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-404036 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-404036 --driver=docker  --container-runtime=crio: (8.406286284s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.41s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-404036 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-404036 "sudo systemctl is-active --quiet service kubelet": exit status 1 (267.035761ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                    
x
+
TestPause/serial/Start (74.44s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-795576 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-795576 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m14.443769814s)
--- PASS: TestPause/serial/Start (74.44s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.8s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-435958
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.80s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (131.62s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-491051 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-491051 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m11.616687564s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (131.62s)
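
FirstStart for the old-k8s-version group pins an older release with --kubernetes-version. A reduced sketch of that invocation, keeping only the flags that matter for the docker driver here; dropping the KVM-specific pass-through flags from the log is an assumption and the full command is shown above.

    # Bring up an older control plane (v1.16.0) on the docker driver with cri-o.
    out/minikube-linux-amd64 start -p old-k8s-version-491051 \
      --memory=2200 --alsologtostderr --wait=true \
      --driver=docker --container-runtime=crio \
      --kubernetes-version=v1.16.0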

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (55.86s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-118885 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
E0717 19:18:38.047825  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/functional-387153/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-118885 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (55.860060895s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (55.86s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (8.39s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-118885 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6caadcc4-a626-4d1f-ab97-bb4f9d2d46e1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6caadcc4-a626-4d1f-ab97-bb4f9d2d46e1] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.013610623s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-118885 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.39s)
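
DeployApp creates a busybox pod from the repo's testdata and then checks the container's file-descriptor limit. A sketch of the manual equivalent; the harness polls for readiness in Go, so the kubectl wait line below (selecting on the integration-test=busybox label seen in the log) is an assumed stand-in for that polling, not what the test actually runs.

    kubectl --context no-preload-118885 create -f testdata/busybox.yaml
    # Stand-in for the test's readiness poll on the label shown in the log.
    kubectl --context no-preload-118885 wait --for=condition=ready pod -l integration-test=busybox --timeout=8m
    # The test's post-deploy check: inspect the open-file limit inside the container.
    kubectl --context no-preload-118885 exec busybox -- /bin/sh -c "ulimit -n"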

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.94s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-118885 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-118885 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.94s)
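
EnableAddonWhileActive turns on the metrics-server addon while the cluster is running, pointing the image at a placeholder registry (fake.domain) so nothing real is pulled, and then only verifies that the Deployment object exists. Sketch using the exact flags from the log.

    # Enable metrics-server with the image and registry overrides used by the test.
    out/minikube-linux-amd64 addons enable metrics-server -p no-preload-118885 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain
    # Confirm the Deployment was created in kube-system.
    kubectl --context no-preload-118885 describe deploy/metrics-server -n kube-system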

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (11.93s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-118885 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-118885 --alsologtostderr -v=3: (11.930115038s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.93s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-118885 -n no-preload-118885
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-118885 -n no-preload-118885: exit status 7 (68.321945ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-118885 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)
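
EnableAddonAfterStop first confirms the host is stopped (minikube status exits with code 7 in that case, which the test treats as acceptable) and then enables the dashboard addon against the stopped profile, to be applied on the next start. Sketch with the commands from the log; the template is quoted only for shell safety.

    # Exit status 7 means the host is stopped; '|| true' keeps the sketch from aborting on it.
    out/minikube-linux-amd64 status --format='{{.Host}}' -p no-preload-118885 -n no-preload-118885 || true
    # Addons can still be toggled while the profile is stopped.
    out/minikube-linux-amd64 addons enable dashboard -p no-preload-118885 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4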

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (339.69s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-118885 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-118885 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (5m39.356844161s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-118885 -n no-preload-118885
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (339.69s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (8.46s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-491051 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5579172a-00d4-46aa-a1eb-fb52a0ff6e97] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5579172a-00d4-46aa-a1eb-fb52a0ff6e97] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.012876773s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-491051 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.46s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.83s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-491051 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-491051 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.83s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (12.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-491051 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-491051 --alsologtostderr -v=3: (12.07017193s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.07s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (70.54s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-224587 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-224587 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (1m10.539477389s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (70.54s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-491051 -n old-k8s-version-491051
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-491051 -n old-k8s-version-491051: exit status 7 (74.317386ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-491051 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (440.14s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-491051 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E0717 19:20:42.943769  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-491051 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m19.832708118s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-491051 -n old-k8s-version-491051
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (440.14s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.43s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-224587 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [546c264a-4245-4be7-beb2-c18b4a2b7472] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [546c264a-4245-4be7-beb2-c18b4a2b7472] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.013729636s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-224587 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.43s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.98s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-224587 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-224587 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.98s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (11.87s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-224587 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-224587 --alsologtostderr -v=3: (11.868822533s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.87s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-224587 -n embed-certs-224587
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-224587 -n embed-certs-224587: exit status 7 (87.769691ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-224587 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (336.66s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-224587 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-224587 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (5m36.215387621s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-224587 -n embed-certs-224587
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (336.66s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (69.61s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-202349 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
E0717 19:22:37.170223  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-202349 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (1m9.614029057s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (69.61s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.43s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-202349 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4c6f1790-468a-4e99-aec4-942ffff430f6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4c6f1790-468a-4e99-aec4-942ffff430f6] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.015307376s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-202349 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.43s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.9s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-202349 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-202349 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.90s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.92s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-202349 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-202349 --alsologtostderr -v=3: (11.917868125s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.92s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-202349 -n default-k8s-diff-port-202349
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-202349 -n default-k8s-diff-port-202349: exit status 7 (91.679396ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-202349 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (346.63s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-202349 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
E0717 19:23:38.046740  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/functional-387153/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-202349 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (5m46.275863197s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-202349 -n default-k8s-diff-port-202349
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (346.63s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-qrzdw" [ff7490f9-fdfc-4455-b54f-9be4c1e137f3] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-qrzdw" [ff7490f9-fdfc-4455-b54f-9be4c1e137f3] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.015426959s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.02s)
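
UserAppExistsAfterStop checks that the dashboard enabled earlier survives the stop/start cycle by waiting for its pod. As with DeployApp, a kubectl wait on the k8s-app=kubernetes-dashboard label and namespace from the log is an assumed stand-in for the harness's Go-side polling.

    kubectl --context no-preload-118885 -n kubernetes-dashboard \
      wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m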

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-qrzdw" [ff7490f9-fdfc-4455-b54f-9be4c1e137f3] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0070042s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-118885 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-118885 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)
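
VerifyKubernetesImages lists the images cached on the node via crictl and reports anything outside the expected Kubernetes set; the kindnetd and busybox images noted above are expected extras, not failures. A sketch of the same listing; the jq filter is an assumption added for readability and is not part of the test.

    # Dump the cri-o image store as JSON from inside the node.
    out/minikube-linux-amd64 ssh -p no-preload-118885 "sudo crictl images -o json" > images.json
    # Optional: print just the repo tags (assumes jq is installed locally).
    jq -r '.images[].repoTags[]' images.json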

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (2.56s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-118885 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-118885 -n no-preload-118885
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-118885 -n no-preload-118885: exit status 2 (287.4913ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-118885 -n no-preload-118885
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-118885 -n no-preload-118885: exit status 2 (280.277494ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-118885 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-118885 -n no-preload-118885
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-118885 -n no-preload-118885
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.56s)
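
The Pause step above pauses the whole profile, confirms through the status templates that the API server reports Paused and the kubelet reports Stopped (status exits 2 in both cases, which the test tolerates), then unpauses and re-checks. Sketch with the commands from the log; the Go templates are quoted only for shell safety.

    out/minikube-linux-amd64 pause -p no-preload-118885 --alsologtostderr -v=1
    # Both checks exit 2 while paused; the printed values are what matter.
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p no-preload-118885 -n no-preload-118885 || true
    out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p no-preload-118885 -n no-preload-118885 || true
    out/minikube-linux-amd64 unpause -p no-preload-118885 --alsologtostderr -v=1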

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (35.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-593755 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
E0717 19:25:40.219453  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/client.crt: no such file or directory
E0717 19:25:42.943773  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/ingress-addon-legacy-795879/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-593755 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (35.207690389s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (35.21s)
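
The newest-cni FirstStart exercises a CNI-only bring-up: it waits only for the API server, system pods, and default service account, switches on a feature gate, and hands a pod CIDR straight to kubeadm via --extra-config. The flags below are copied from the log; only the line breaks are added.

    out/minikube-linux-amd64 start -p newest-cni-593755 --memory=2200 --alsologtostderr \
      --wait=apiserver,system_pods,default_sa \
      --feature-gates ServerSideApply=true \
      --network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.27.3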

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.84s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-593755 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.84s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (1.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-593755 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-593755 --alsologtostderr -v=3: (1.193581046s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.19s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-593755 -n newest-cni-593755
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-593755 -n newest-cni-593755: exit status 7 (62.914686ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-593755 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (26.34s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-593755 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-593755 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.3: (25.992280405s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-593755 -n newest-cni-593755
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (26.34s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-593755 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.89s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-593755 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-593755 -n newest-cni-593755
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-593755 -n newest-cni-593755: exit status 2 (337.495936ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-593755 -n newest-cni-593755
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-593755 -n newest-cni-593755: exit status 2 (326.908267ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-593755 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-593755 -n newest-cni-593755
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-593755 -n newest-cni-593755
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.89s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (67.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-536750 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-536750 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m7.923245751s)
--- PASS: TestNetworkPlugins/group/auto/Start (67.92s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (12.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-57jrm" [361e87a0-aeaa-4d8e-86cb-98d6c8035b97] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0717 19:27:37.170739  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/addons-646610/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-57jrm" [361e87a0-aeaa-4d8e-86cb-98d6c8035b97] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.013507216s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (12.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-57jrm" [361e87a0-aeaa-4d8e-86cb-98d6c8035b97] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00788109s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-224587 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-536750 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (10.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-536750 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-9v4zv" [8bf54932-bdb7-4345-8cd1-e86cf1ab5ea7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-9v4zv" [8bf54932-bdb7-4345-8cd1-e86cf1ab5ea7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.005995433s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.29s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-224587 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (2.74s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-224587 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-224587 -n embed-certs-224587
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-224587 -n embed-certs-224587: exit status 2 (300.23921ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-224587 -n embed-certs-224587
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-224587 -n embed-certs-224587: exit status 2 (296.449342ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-224587 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-224587 -n embed-certs-224587
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-224587 -n embed-certs-224587
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.74s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-9fwsc" [5830b72c-5122-4cc5-b2be-b6c0930a79d7] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.088906997s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (71.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-536750 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-536750 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m11.02077434s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (71.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-536750 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-536750 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-536750 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
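
The auto/DNS, auto/Localhost, and auto/HairPin checks above all run against the netcat deployment created in NetCatPod: a cluster DNS lookup, a loopback connection, and a hairpin connection back through the pod's own service. A sketch collecting the probes from the log into one sequence.

    # Deploy the probe pod (manifest from the minikube repo's integration-test testdata).
    kubectl --context auto-536750 replace --force -f testdata/netcat-deployment.yaml
    # DNS: resolve the in-cluster API service name.
    kubectl --context auto-536750 exec deployment/netcat -- nslookup kubernetes.default
    # Localhost: connect to the pod's own port over loopback.
    kubectl --context auto-536750 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # Hairpin: connect back to the pod through its service name.
    kubectl --context auto-536750 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"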

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-9fwsc" [5830b72c-5122-4cc5-b2be-b6c0930a79d7] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006459994s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-491051 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-491051 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (2.9s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-491051 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-491051 -n old-k8s-version-491051
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-491051 -n old-k8s-version-491051: exit status 2 (323.804902ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-491051 -n old-k8s-version-491051
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-491051 -n old-k8s-version-491051: exit status 2 (308.325624ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-491051 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-491051 -n old-k8s-version-491051
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-491051 -n old-k8s-version-491051
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.90s)
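Note: the Pause check above round-trips pause/unpause on the profile and reads component state through minikube status; exit status 2 from status while the cluster is paused is expected, as the test output itself notes. A rough manual reproduction with the same profile (a sketch, not the test's literal code path):

  out/minikube-linux-amd64 pause -p old-k8s-version-491051 --alsologtostderr -v=1
  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-491051 -n old-k8s-version-491051   # prints "Paused", exits 2
  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-491051 -n old-k8s-version-491051     # prints "Stopped", exits 2
  out/minikube-linux-amd64 unpause -p old-k8s-version-491051 --alsologtostderr -v=1
  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-491051 -n old-k8s-version-491051   # running again, exits 0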

TestNetworkPlugins/group/calico/Start (68.46s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-536750 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-536750 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m8.458883103s)
--- PASS: TestNetworkPlugins/group/calico/Start (68.46s)

TestNetworkPlugins/group/custom-flannel/Start (60.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-536750 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0717 19:28:38.047137  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/functional-387153/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-536750 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m0.200939869s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (60.20s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-klw66" [7c2ea030-ca1f-4431-ae5d-4db6db8eac8b] Running
E0717 19:29:11.289505  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/no-preload-118885/client.crt: no such file or directory
E0717 19:29:11.294773  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/no-preload-118885/client.crt: no such file or directory
E0717 19:29:11.305038  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/no-preload-118885/client.crt: no such file or directory
E0717 19:29:11.325318  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/no-preload-118885/client.crt: no such file or directory
E0717 19:29:11.366361  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/no-preload-118885/client.crt: no such file or directory
E0717 19:29:11.446971  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/no-preload-118885/client.crt: no such file or directory
E0717 19:29:11.607383  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/no-preload-118885/client.crt: no such file or directory
E0717 19:29:11.927985  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/no-preload-118885/client.crt: no such file or directory
E0717 19:29:12.569211  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/no-preload-118885/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.020439871s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-536750 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.3s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-536750 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-vprhw" [c507f770-2def-4d89-abcb-eb32cf7c800a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0717 19:29:13.850150  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/no-preload-118885/client.crt: no such file or directory
E0717 19:29:16.411335  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/no-preload-118885/client.crt: no such file or directory
helpers_test.go:344: "netcat-7458db8b8-vprhw" [c507f770-2def-4d89-abcb-eb32cf7c800a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.007000307s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.30s)
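Note: every NetCatPod step in this report follows the same pattern: deploy the shared netcat Deployment from the test's testdata, then poll until a pod labelled app=netcat is Running and ready. A hedged manual equivalent for the kindnet profile (the kubectl wait form approximates the test helper's polling; it is not the literal test code):

  kubectl --context kindnet-536750 replace --force -f testdata/netcat-deployment.yaml
  kubectl --context kindnet-536750 wait --for=condition=ready pod -l app=netcat --timeout=15m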

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (8.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-t9vgd" [d66a7d9e-9c58-4a2f-83f8-37842a0bec77] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-t9vgd" [d66a7d9e-9c58-4a2f-83f8-37842a0bec77] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.016767483s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (8.02s)

TestNetworkPlugins/group/calico/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-28nqk" [ad5ee4cc-818f-4e29-be49-48775cd4abaa] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.017852598s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-536750 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-536750 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-zt8sb" [3bda93dc-8f49-4de4-9f76-2cc45cc51efd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0717 19:29:21.532051  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/no-preload-118885/client.crt: no such file or directory
helpers_test.go:344: "netcat-7458db8b8-zt8sb" [3bda93dc-8f49-4de4-9f76-2cc45cc51efd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.007724798s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.30s)

TestNetworkPlugins/group/kindnet/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-536750 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

TestNetworkPlugins/group/kindnet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-536750 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

TestNetworkPlugins/group/kindnet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-536750 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)
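Note: the DNS, Localhost, and HairPin checks above all run short probes inside the netcat deployment: cluster DNS resolution, a TCP connect to localhost:8080, and a connect back to the pod's own netcat Service (the hairpin case). The three commands, as invoked for the kindnet profile:

  kubectl --context kindnet-536750 exec deployment/netcat -- nslookup kubernetes.default
  kubectl --context kindnet-536750 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  kubectl --context kindnet-536750 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"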

TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-536750 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-t9vgd" [d66a7d9e-9c58-4a2f-83f8-37842a0bec77] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006760441s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-202349 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestNetworkPlugins/group/calico/NetCatPod (10.34s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-536750 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-jjxjd" [274ab2ef-1be0-40d3-9d78-46728b157896] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-jjxjd" [274ab2ef-1be0-40d3-9d78-46728b157896] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.006988527s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.34s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-202349 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.32s)
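Note: VerifyKubernetesImages lists the node's CRI-O image store over SSH and scans the JSON for images outside the expected minikube set. The listing command is the one shown above; piping it through jq is only an illustrative way to print the repo tags, and assumes jq is available on the host running the test binary:

  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-202349 "sudo crictl images -o json" | jq -r '.images[].repoTags[]'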

TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-536750 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-202349 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-202349 -n default-k8s-diff-port-202349
E0717 19:29:31.773083  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/no-preload-118885/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-202349 -n default-k8s-diff-port-202349: exit status 2 (307.884166ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-202349 -n default-k8s-diff-port-202349
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-202349 -n default-k8s-diff-port-202349: exit status 2 (334.697981ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-202349 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-202349 -n default-k8s-diff-port-202349
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-202349 -n default-k8s-diff-port-202349
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.97s)
E0717 19:30:14.268344  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/old-k8s-version-491051/client.crt: no such file or directory
E0717 19:30:14.273668  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/old-k8s-version-491051/client.crt: no such file or directory
E0717 19:30:14.283989  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/old-k8s-version-491051/client.crt: no such file or directory
E0717 19:30:14.304324  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/old-k8s-version-491051/client.crt: no such file or directory
E0717 19:30:14.344659  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/old-k8s-version-491051/client.crt: no such file or directory
E0717 19:30:14.425067  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/old-k8s-version-491051/client.crt: no such file or directory
E0717 19:30:14.586147  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/old-k8s-version-491051/client.crt: no such file or directory
E0717 19:30:14.907346  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/old-k8s-version-491051/client.crt: no such file or directory
E0717 19:30:15.548393  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/old-k8s-version-491051/client.crt: no such file or directory
E0717 19:30:16.829194  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/old-k8s-version-491051/client.crt: no such file or directory
E0717 19:30:19.389782  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/old-k8s-version-491051/client.crt: no such file or directory

TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-536750 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-536750 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

TestNetworkPlugins/group/calico/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-536750 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

TestNetworkPlugins/group/calico/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-536750 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

TestNetworkPlugins/group/calico/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-536750 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

TestNetworkPlugins/group/enable-default-cni/Start (43.66s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-536750 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-536750 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (43.65683585s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (43.66s)

TestNetworkPlugins/group/flannel/Start (58.78s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-536750 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-536750 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (58.779568943s)
--- PASS: TestNetworkPlugins/group/flannel/Start (58.78s)

TestNetworkPlugins/group/bridge/Start (67.36s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-536750 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-536750 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m7.364675387s)
--- PASS: TestNetworkPlugins/group/bridge/Start (67.36s)
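Note: the five network-plugin Start runs in this report share one base invocation and differ only in how the CNI is selected. Condensed from the commands above, with the common flags written out once:

  out/minikube-linux-amd64 start -p calico-536750 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker --container-runtime=crio
  # custom-flannel-536750:     --cni=testdata/kube-flannel.yaml
  # enable-default-cni-536750: --enable-default-cni=true
  # flannel-536750:            --cni=flannel
  # bridge-536750:             --cni=bridge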

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-536750 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.39s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-536750 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-76g5f" [09e52af4-f8a8-4fa0-894c-f0d16dc73252] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0717 19:30:24.510766  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/old-k8s-version-491051/client.crt: no such file or directory
helpers_test.go:344: "netcat-7458db8b8-76g5f" [09e52af4-f8a8-4fa0-894c-f0d16dc73252] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.006505026s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.39s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-536750 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-536750 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-536750 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0717 19:30:33.215384  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/no-preload-118885/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-wksc7" [9cb09511-4bb1-4452-9c79-d7537041581b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.015182844s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-536750 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

TestNetworkPlugins/group/flannel/NetCatPod (9.27s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-536750 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-dv2pq" [be566953-df34-4ee6-a143-d10c67fb84a5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-dv2pq" [be566953-df34-4ee6-a143-d10c67fb84a5] Running
E0717 19:30:55.231802  144822 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16890-138069/.minikube/profiles/old-k8s-version-491051/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.006793268s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.27s)

TestNetworkPlugins/group/flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-536750 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

TestNetworkPlugins/group/flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-536750 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

TestNetworkPlugins/group/flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-536750 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-536750 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

TestNetworkPlugins/group/bridge/NetCatPod (9.28s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-536750 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-bfvqj" [9dd510e0-026b-43ea-aa49-124897da0143] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-bfvqj" [9dd510e0-026b-43ea-aa49-124897da0143] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.009183347s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.28s)

TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-536750 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

TestNetworkPlugins/group/bridge/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-536750 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-536750 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

Test skip (24/298)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.27.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.27.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.3/cached-images (0.00s)

TestDownloadOnly/v1.27.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.27.3/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.27.3/binaries (0.00s)

TestDownloadOnly/v1.27.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.27.3/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.27.3/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.14s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-827906" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-827906
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)

TestNetworkPlugins/group/kubenet (4.1s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:522: 
----------------------- debugLogs start: kubenet-536750 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-536750
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-536750
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-536750
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-536750
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-536750
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-536750
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-536750
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-536750
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-536750
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-536750
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536750"
>>> host: /etc/hosts:
* Profile "kubenet-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536750"
>>> host: /etc/resolv.conf:
* Profile "kubenet-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536750"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-536750
>>> host: crictl pods:
* Profile "kubenet-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536750"
>>> host: crictl containers:
* Profile "kubenet-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536750"
>>> k8s: describe netcat deployment:
error: context "kubenet-536750" does not exist
>>> k8s: describe netcat pod(s):
error: context "kubenet-536750" does not exist
>>> k8s: netcat logs:
error: context "kubenet-536750" does not exist
>>> k8s: describe coredns deployment:
error: context "kubenet-536750" does not exist
>>> k8s: describe coredns pods:
error: context "kubenet-536750" does not exist
>>> k8s: coredns logs:
error: context "kubenet-536750" does not exist
>>> k8s: describe api server pod(s):
error: context "kubenet-536750" does not exist
>>> k8s: api server logs:
error: context "kubenet-536750" does not exist
>>> host: /etc/cni:
* Profile "kubenet-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536750"
>>> host: ip a s:
* Profile "kubenet-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536750"
>>> host: ip r s:
* Profile "kubenet-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536750"
>>> host: iptables-save:
* Profile "kubenet-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536750"
>>> host: iptables table nat:
* Profile "kubenet-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536750"
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-536750" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-536750" does not exist
>>> k8s: kube-proxy logs:
error: context "kubenet-536750" does not exist
>>> host: kubelet daemon status:
* Profile "kubenet-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536750"
>>> host: kubelet daemon config:
* Profile "kubenet-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536750"
>>> k8s: kubelet logs:
* Profile "kubenet-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536750"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536750"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536750"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-536750

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536750"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536750"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536750"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536750"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536750"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536750"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536750"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536750"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536750"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536750"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536750"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536750"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536750"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536750"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536750"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536750"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536750"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-536750"

                                                
                                                
----------------------- debugLogs end: kubenet-536750 [took: 3.895979501s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-536750" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-536750
--- SKIP: TestNetworkPlugins/group/kubenet (4.10s)

                                                
                                    
TestNetworkPlugins/group/cilium (4.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-536750 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-536750

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-536750

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-536750

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-536750

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-536750

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-536750

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-536750

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-536750

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-536750

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-536750

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536750"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536750"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536750"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-536750

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536750"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536750"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-536750" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-536750" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-536750" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-536750" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-536750" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-536750" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-536750" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-536750" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536750"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536750"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536750"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536750"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536750"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-536750

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-536750

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-536750" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-536750" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-536750

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-536750

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-536750" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-536750" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-536750" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-536750" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-536750" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536750"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536750"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536750"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536750"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536750"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-536750

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536750"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536750"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536750"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536750"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536750"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536750"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536750"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536750"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536750"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536750"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536750"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536750"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536750"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536750"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536750"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536750"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536750"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-536750" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-536750"

                                                
                                                
----------------------- debugLogs end: cilium-536750 [took: 3.93971136s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-536750" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-536750
--- SKIP: TestNetworkPlugins/group/cilium (4.11s)

                                                
                                    